Test Report: KVM_Linux_crio 19429

b06913c07d6338950e5c7fdbd8346c60c9653ed1:2024-08-14:35775

Failed tests (27/318)

Order | Failed test | Duration (s)
34 TestAddons/parallel/Ingress 153.26
36 TestAddons/parallel/MetricsServer 346.66
45 TestAddons/StoppedEnableDisable 154.35
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 354.57
171 TestMultiControlPlane/serial/StopCluster 141.67
231 TestMultiNode/serial/RestartKeepsNodes 324.69
233 TestMultiNode/serial/StopMultiNode 141.11
240 TestPreload 283.69
248 TestKubernetesUpgrade 389.32
284 TestStartStop/group/old-k8s-version/serial/FirstStart 297.4
300 TestStartStop/group/embed-certs/serial/Stop 139.01
303 TestStartStop/group/no-preload/serial/Stop 139.07
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.13
307 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 91.47
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
317 TestStartStop/group/old-k8s-version/serial/SecondStart 770.86
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.04
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.04
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.15
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.29
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 454.3
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 478.36
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 343.69
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 93.28
TestAddons/parallel/Ingress (153.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-937866 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-937866 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-937866 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [58e32069-0078-4b2c-83a7-45c915783932] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [58e32069-0078-4b2c-83a7-45c915783932] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003609102s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-937866 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.145103818s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-937866 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.8
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 addons disable ingress-dns --alsologtostderr -v=1: (1.582360883s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 addons disable ingress --alsologtostderr -v=1: (7.668588617s)
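The probe that timed out above can be replayed by hand for local debugging. The sketch below is hypothetical (it is not the addons_test.go helper); it assumes an existing "addons-937866" profile and the minikube binary at out/minikube-linux-amd64, and simply re-runs the same ssh curl. The "Process exited with status 28" in the stderr block is curl's "operation timed out" exit code, so the ingress controller never answered the Host-header request within curl's timeout.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the test runs at addons_test.go:264 (profile name and binary path assumed).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-937866",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A timeout here reproduces the failure recorded in this report.
		fmt.Printf("ingress probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("ingress responded:\n%s", out)
}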
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-937866 -n addons-937866
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 logs -n 25: (1.128864714s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-343093                                                                     | download-only-343093 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| delete  | -p download-only-307809                                                                     | download-only-307809 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-857485 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | binary-mirror-857485                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46401                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-857485                                                                     | binary-mirror-857485 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-937866 --wait=true                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-937866 ssh cat                                                                       | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | /opt/local-path-provisioner/pvc-a7fb6e01-e9d6-4ee0-9569-672424823465_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-937866 ip                                                                            | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | -p addons-937866                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | -p addons-937866                                                                            |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-937866 ssh curl -s                                                                   | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-937866 addons                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-937866 addons                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-937866 ip                                                                            | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 23:47:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 23:47:56.015304   17389 out.go:291] Setting OutFile to fd 1 ...
	I0813 23:47:56.015406   17389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:56.015415   17389 out.go:304] Setting ErrFile to fd 2...
	I0813 23:47:56.015419   17389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:56.015581   17389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0813 23:47:56.016105   17389 out.go:298] Setting JSON to false
	I0813 23:47:56.016907   17389 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1822,"bootTime":1723591054,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0813 23:47:56.016958   17389 start.go:139] virtualization: kvm guest
	I0813 23:47:56.018901   17389 out.go:177] * [addons-937866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0813 23:47:56.019990   17389 notify.go:220] Checking for updates...
	I0813 23:47:56.020005   17389 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 23:47:56.021169   17389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 23:47:56.022232   17389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:47:56.023457   17389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:56.024684   17389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 23:47:56.025798   17389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 23:47:56.027084   17389 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 23:47:56.057890   17389 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 23:47:56.059131   17389 start.go:297] selected driver: kvm2
	I0813 23:47:56.059143   17389 start.go:901] validating driver "kvm2" against <nil>
	I0813 23:47:56.059153   17389 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 23:47:56.059796   17389 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:56.059851   17389 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 23:47:56.074106   17389 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0813 23:47:56.074157   17389 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 23:47:56.074366   17389 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 23:47:56.074397   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:47:56.074404   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:47:56.074411   17389 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 23:47:56.074463   17389 start.go:340] cluster config:
	{Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:47:56.074542   17389 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:56.076063   17389 out.go:177] * Starting "addons-937866" primary control-plane node in "addons-937866" cluster
	I0813 23:47:56.077069   17389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:47:56.077097   17389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0813 23:47:56.077105   17389 cache.go:56] Caching tarball of preloaded images
	I0813 23:47:56.077157   17389 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 23:47:56.077167   17389 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0813 23:47:56.077449   17389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json ...
	I0813 23:47:56.077466   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json: {Name:mk8a28a8ad54dcd755c2ce1cbf17fe2ba8c5cf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:47:56.077572   17389 start.go:360] acquireMachinesLock for addons-937866: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 23:47:56.077620   17389 start.go:364] duration metric: took 35.654µs to acquireMachinesLock for "addons-937866"
	I0813 23:47:56.077636   17389 start.go:93] Provisioning new machine with config: &{Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0813 23:47:56.077693   17389 start.go:125] createHost starting for "" (driver="kvm2")
	I0813 23:47:56.079130   17389 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0813 23:47:56.079247   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:47:56.079279   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:47:56.092702   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0813 23:47:56.093045   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:47:56.093535   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:47:56.093554   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:47:56.093910   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:47:56.094090   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:47:56.094219   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:47:56.094374   17389 start.go:159] libmachine.API.Create for "addons-937866" (driver="kvm2")
	I0813 23:47:56.094407   17389 client.go:168] LocalClient.Create starting
	I0813 23:47:56.094445   17389 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0813 23:47:56.302561   17389 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0813 23:47:56.629781   17389 main.go:141] libmachine: Running pre-create checks...
	I0813 23:47:56.629806   17389 main.go:141] libmachine: (addons-937866) Calling .PreCreateCheck
	I0813 23:47:56.630298   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:47:56.630768   17389 main.go:141] libmachine: Creating machine...
	I0813 23:47:56.630784   17389 main.go:141] libmachine: (addons-937866) Calling .Create
	I0813 23:47:56.630895   17389 main.go:141] libmachine: (addons-937866) Creating KVM machine...
	I0813 23:47:56.632127   17389 main.go:141] libmachine: (addons-937866) DBG | found existing default KVM network
	I0813 23:47:56.632809   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.632684   17411 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0813 23:47:56.632825   17389 main.go:141] libmachine: (addons-937866) DBG | created network xml: 
	I0813 23:47:56.632834   17389 main.go:141] libmachine: (addons-937866) DBG | <network>
	I0813 23:47:56.632844   17389 main.go:141] libmachine: (addons-937866) DBG |   <name>mk-addons-937866</name>
	I0813 23:47:56.632855   17389 main.go:141] libmachine: (addons-937866) DBG |   <dns enable='no'/>
	I0813 23:47:56.632864   17389 main.go:141] libmachine: (addons-937866) DBG |   
	I0813 23:47:56.632874   17389 main.go:141] libmachine: (addons-937866) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0813 23:47:56.632880   17389 main.go:141] libmachine: (addons-937866) DBG |     <dhcp>
	I0813 23:47:56.632886   17389 main.go:141] libmachine: (addons-937866) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0813 23:47:56.632892   17389 main.go:141] libmachine: (addons-937866) DBG |     </dhcp>
	I0813 23:47:56.632914   17389 main.go:141] libmachine: (addons-937866) DBG |   </ip>
	I0813 23:47:56.632925   17389 main.go:141] libmachine: (addons-937866) DBG |   
	I0813 23:47:56.632932   17389 main.go:141] libmachine: (addons-937866) DBG | </network>
	I0813 23:47:56.632977   17389 main.go:141] libmachine: (addons-937866) DBG | 
	I0813 23:47:56.638257   17389 main.go:141] libmachine: (addons-937866) DBG | trying to create private KVM network mk-addons-937866 192.168.39.0/24...
	I0813 23:47:56.699410   17389 main.go:141] libmachine: (addons-937866) DBG | private KVM network mk-addons-937866 192.168.39.0/24 created
	I0813 23:47:56.699443   17389 main.go:141] libmachine: (addons-937866) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 ...
	I0813 23:47:56.699466   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.699356   17411 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:56.699484   17389 main.go:141] libmachine: (addons-937866) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0813 23:47:56.699505   17389 main.go:141] libmachine: (addons-937866) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0813 23:47:56.965916   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.965748   17411 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa...
	I0813 23:47:57.109728   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:57.109628   17411 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/addons-937866.rawdisk...
	I0813 23:47:57.109756   17389 main.go:141] libmachine: (addons-937866) DBG | Writing magic tar header
	I0813 23:47:57.109767   17389 main.go:141] libmachine: (addons-937866) DBG | Writing SSH key tar header
	I0813 23:47:57.109775   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:57.109735   17411 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 ...
	I0813 23:47:57.109888   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866
	I0813 23:47:57.109908   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 (perms=drwx------)
	I0813 23:47:57.109916   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0813 23:47:57.109926   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:57.109936   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0813 23:47:57.109949   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 23:47:57.109957   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins
	I0813 23:47:57.109969   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home
	I0813 23:47:57.109978   17389 main.go:141] libmachine: (addons-937866) DBG | Skipping /home - not owner
	I0813 23:47:57.109998   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0813 23:47:57.110011   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0813 23:47:57.110018   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0813 23:47:57.110058   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0813 23:47:57.110074   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 23:47:57.110083   17389 main.go:141] libmachine: (addons-937866) Creating domain...
	I0813 23:47:57.110973   17389 main.go:141] libmachine: (addons-937866) define libvirt domain using xml: 
	I0813 23:47:57.111006   17389 main.go:141] libmachine: (addons-937866) <domain type='kvm'>
	I0813 23:47:57.111016   17389 main.go:141] libmachine: (addons-937866)   <name>addons-937866</name>
	I0813 23:47:57.111028   17389 main.go:141] libmachine: (addons-937866)   <memory unit='MiB'>4000</memory>
	I0813 23:47:57.111037   17389 main.go:141] libmachine: (addons-937866)   <vcpu>2</vcpu>
	I0813 23:47:57.111046   17389 main.go:141] libmachine: (addons-937866)   <features>
	I0813 23:47:57.111054   17389 main.go:141] libmachine: (addons-937866)     <acpi/>
	I0813 23:47:57.111062   17389 main.go:141] libmachine: (addons-937866)     <apic/>
	I0813 23:47:57.111070   17389 main.go:141] libmachine: (addons-937866)     <pae/>
	I0813 23:47:57.111080   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111088   17389 main.go:141] libmachine: (addons-937866)   </features>
	I0813 23:47:57.111100   17389 main.go:141] libmachine: (addons-937866)   <cpu mode='host-passthrough'>
	I0813 23:47:57.111108   17389 main.go:141] libmachine: (addons-937866)   
	I0813 23:47:57.111116   17389 main.go:141] libmachine: (addons-937866)   </cpu>
	I0813 23:47:57.111127   17389 main.go:141] libmachine: (addons-937866)   <os>
	I0813 23:47:57.111135   17389 main.go:141] libmachine: (addons-937866)     <type>hvm</type>
	I0813 23:47:57.111147   17389 main.go:141] libmachine: (addons-937866)     <boot dev='cdrom'/>
	I0813 23:47:57.111157   17389 main.go:141] libmachine: (addons-937866)     <boot dev='hd'/>
	I0813 23:47:57.111175   17389 main.go:141] libmachine: (addons-937866)     <bootmenu enable='no'/>
	I0813 23:47:57.111197   17389 main.go:141] libmachine: (addons-937866)   </os>
	I0813 23:47:57.111207   17389 main.go:141] libmachine: (addons-937866)   <devices>
	I0813 23:47:57.111215   17389 main.go:141] libmachine: (addons-937866)     <disk type='file' device='cdrom'>
	I0813 23:47:57.111226   17389 main.go:141] libmachine: (addons-937866)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/boot2docker.iso'/>
	I0813 23:47:57.111234   17389 main.go:141] libmachine: (addons-937866)       <target dev='hdc' bus='scsi'/>
	I0813 23:47:57.111239   17389 main.go:141] libmachine: (addons-937866)       <readonly/>
	I0813 23:47:57.111246   17389 main.go:141] libmachine: (addons-937866)     </disk>
	I0813 23:47:57.111253   17389 main.go:141] libmachine: (addons-937866)     <disk type='file' device='disk'>
	I0813 23:47:57.111261   17389 main.go:141] libmachine: (addons-937866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0813 23:47:57.111269   17389 main.go:141] libmachine: (addons-937866)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/addons-937866.rawdisk'/>
	I0813 23:47:57.111277   17389 main.go:141] libmachine: (addons-937866)       <target dev='hda' bus='virtio'/>
	I0813 23:47:57.111283   17389 main.go:141] libmachine: (addons-937866)     </disk>
	I0813 23:47:57.111290   17389 main.go:141] libmachine: (addons-937866)     <interface type='network'>
	I0813 23:47:57.111296   17389 main.go:141] libmachine: (addons-937866)       <source network='mk-addons-937866'/>
	I0813 23:47:57.111305   17389 main.go:141] libmachine: (addons-937866)       <model type='virtio'/>
	I0813 23:47:57.111326   17389 main.go:141] libmachine: (addons-937866)     </interface>
	I0813 23:47:57.111344   17389 main.go:141] libmachine: (addons-937866)     <interface type='network'>
	I0813 23:47:57.111357   17389 main.go:141] libmachine: (addons-937866)       <source network='default'/>
	I0813 23:47:57.111381   17389 main.go:141] libmachine: (addons-937866)       <model type='virtio'/>
	I0813 23:47:57.111391   17389 main.go:141] libmachine: (addons-937866)     </interface>
	I0813 23:47:57.111398   17389 main.go:141] libmachine: (addons-937866)     <serial type='pty'>
	I0813 23:47:57.111403   17389 main.go:141] libmachine: (addons-937866)       <target port='0'/>
	I0813 23:47:57.111409   17389 main.go:141] libmachine: (addons-937866)     </serial>
	I0813 23:47:57.111415   17389 main.go:141] libmachine: (addons-937866)     <console type='pty'>
	I0813 23:47:57.111426   17389 main.go:141] libmachine: (addons-937866)       <target type='serial' port='0'/>
	I0813 23:47:57.111433   17389 main.go:141] libmachine: (addons-937866)     </console>
	I0813 23:47:57.111438   17389 main.go:141] libmachine: (addons-937866)     <rng model='virtio'>
	I0813 23:47:57.111445   17389 main.go:141] libmachine: (addons-937866)       <backend model='random'>/dev/random</backend>
	I0813 23:47:57.111451   17389 main.go:141] libmachine: (addons-937866)     </rng>
	I0813 23:47:57.111456   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111462   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111467   17389 main.go:141] libmachine: (addons-937866)   </devices>
	I0813 23:47:57.111473   17389 main.go:141] libmachine: (addons-937866) </domain>
	I0813 23:47:57.111480   17389 main.go:141] libmachine: (addons-937866) 
	I0813 23:47:57.117248   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:6e:88:2b in network default
	I0813 23:47:57.117773   17389 main.go:141] libmachine: (addons-937866) Ensuring networks are active...
	I0813 23:47:57.117794   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:57.118407   17389 main.go:141] libmachine: (addons-937866) Ensuring network default is active
	I0813 23:47:57.118729   17389 main.go:141] libmachine: (addons-937866) Ensuring network mk-addons-937866 is active
	I0813 23:47:57.119257   17389 main.go:141] libmachine: (addons-937866) Getting domain xml...
	I0813 23:47:57.119908   17389 main.go:141] libmachine: (addons-937866) Creating domain...
	I0813 23:47:58.500071   17389 main.go:141] libmachine: (addons-937866) Waiting to get IP...
	I0813 23:47:58.500884   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:58.501350   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:58.501400   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:58.501343   17411 retry.go:31] will retry after 295.085727ms: waiting for machine to come up
	I0813 23:47:58.797710   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:58.798089   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:58.798125   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:58.798019   17411 retry.go:31] will retry after 366.444505ms: waiting for machine to come up
	I0813 23:47:59.165565   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:59.165989   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:59.166017   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:59.165940   17411 retry.go:31] will retry after 420.97021ms: waiting for machine to come up
	I0813 23:47:59.589904   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:59.590365   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:59.590393   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:59.590315   17411 retry.go:31] will retry after 443.200792ms: waiting for machine to come up
	I0813 23:48:00.035144   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:00.035702   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:00.035741   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:00.035649   17411 retry.go:31] will retry after 681.201668ms: waiting for machine to come up
	I0813 23:48:00.718414   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:00.718796   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:00.718850   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:00.718782   17411 retry.go:31] will retry after 643.430207ms: waiting for machine to come up
	I0813 23:48:01.364137   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:01.364511   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:01.364538   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:01.364461   17411 retry.go:31] will retry after 752.692025ms: waiting for machine to come up
	I0813 23:48:02.118473   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:02.118872   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:02.118893   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:02.118846   17411 retry.go:31] will retry after 1.147620092s: waiting for machine to come up
	I0813 23:48:03.268025   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:03.268468   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:03.268496   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:03.268417   17411 retry.go:31] will retry after 1.646773744s: waiting for machine to come up
	I0813 23:48:04.916483   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:04.916812   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:04.916840   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:04.916767   17411 retry.go:31] will retry after 1.966715915s: waiting for machine to come up
	I0813 23:48:06.884641   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:06.885074   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:06.885103   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:06.885022   17411 retry.go:31] will retry after 1.868597461s: waiting for machine to come up
	I0813 23:48:08.755960   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:08.756378   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:08.756408   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:08.756332   17411 retry.go:31] will retry after 3.478823879s: waiting for machine to come up
	I0813 23:48:12.237211   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:12.237564   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:12.237589   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:12.237536   17411 retry.go:31] will retry after 4.371295963s: waiting for machine to come up
	I0813 23:48:16.610789   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.611358   17389 main.go:141] libmachine: (addons-937866) Found IP for machine: 192.168.39.8
	I0813 23:48:16.611385   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has current primary IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.611398   17389 main.go:141] libmachine: (addons-937866) Reserving static IP address...
	I0813 23:48:16.611716   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find host DHCP lease matching {name: "addons-937866", mac: "52:54:00:a3:c3:1c", ip: "192.168.39.8"} in network mk-addons-937866
	I0813 23:48:16.680908   17389 main.go:141] libmachine: (addons-937866) DBG | Getting to WaitForSSH function...
	I0813 23:48:16.680933   17389 main.go:141] libmachine: (addons-937866) Reserved static IP address: 192.168.39.8
	I0813 23:48:16.680945   17389 main.go:141] libmachine: (addons-937866) Waiting for SSH to be available...
	I0813 23:48:16.683392   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.683811   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.683834   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.683999   17389 main.go:141] libmachine: (addons-937866) DBG | Using SSH client type: external
	I0813 23:48:16.684024   17389 main.go:141] libmachine: (addons-937866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa (-rw-------)
	I0813 23:48:16.684055   17389 main.go:141] libmachine: (addons-937866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 23:48:16.684071   17389 main.go:141] libmachine: (addons-937866) DBG | About to run SSH command:
	I0813 23:48:16.684082   17389 main.go:141] libmachine: (addons-937866) DBG | exit 0
	I0813 23:48:16.814089   17389 main.go:141] libmachine: (addons-937866) DBG | SSH cmd err, output: <nil>: 
	I0813 23:48:16.814340   17389 main.go:141] libmachine: (addons-937866) KVM machine creation complete!
	I0813 23:48:16.814634   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:48:16.815102   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:16.815290   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:16.815462   17389 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 23:48:16.815475   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:16.816739   17389 main.go:141] libmachine: Detecting operating system of created instance...
	I0813 23:48:16.816752   17389 main.go:141] libmachine: Waiting for SSH to be available...
	I0813 23:48:16.816758   17389 main.go:141] libmachine: Getting to WaitForSSH function...
	I0813 23:48:16.816764   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:16.819160   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.819504   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.819531   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.819638   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:16.819812   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.819964   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.820100   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:16.820273   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:16.820440   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:16.820450   17389 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0813 23:48:16.925176   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 23:48:16.925199   17389 main.go:141] libmachine: Detecting the provisioner...
	I0813 23:48:16.925210   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:16.927699   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.928115   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.928137   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.928287   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:16.928496   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.928725   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.928889   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:16.929061   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:16.929250   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:16.929267   17389 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 23:48:17.034140   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0813 23:48:17.034237   17389 main.go:141] libmachine: found compatible host: buildroot
	I0813 23:48:17.034252   17389 main.go:141] libmachine: Provisioning with buildroot...
	I0813 23:48:17.034261   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.034546   17389 buildroot.go:166] provisioning hostname "addons-937866"
	I0813 23:48:17.034567   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.034726   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.037219   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.037509   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.037541   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.037642   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.037788   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.037926   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.038090   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.038264   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.038459   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.038476   17389 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-937866 && echo "addons-937866" | sudo tee /etc/hostname
	I0813 23:48:17.158940   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-937866
	
	I0813 23:48:17.158971   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.161357   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.161682   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.161711   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.161836   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.162029   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.162181   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.162349   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.162509   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.162674   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.162689   17389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-937866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-937866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-937866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 23:48:17.277816   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 23:48:17.277845   17389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0813 23:48:17.277893   17389 buildroot.go:174] setting up certificates
	I0813 23:48:17.277907   17389 provision.go:84] configureAuth start
	I0813 23:48:17.277920   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.278254   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:17.280752   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.281042   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.281065   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.281184   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.283453   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.283754   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.283782   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.283958   17389 provision.go:143] copyHostCerts
	I0813 23:48:17.284030   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0813 23:48:17.284177   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0813 23:48:17.284259   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0813 23:48:17.284325   17389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.addons-937866 san=[127.0.0.1 192.168.39.8 addons-937866 localhost minikube]
	I0813 23:48:17.410467   17389 provision.go:177] copyRemoteCerts
	I0813 23:48:17.410529   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 23:48:17.410551   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.412942   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.413289   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.413313   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.413443   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.413636   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.413754   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.413912   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:17.496011   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 23:48:17.518748   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0813 23:48:17.540899   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 23:48:17.562888   17389 provision.go:87] duration metric: took 284.967043ms to configureAuth
	I0813 23:48:17.562910   17389 buildroot.go:189] setting minikube options for container-runtime
	I0813 23:48:17.563093   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:17.563180   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.565610   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.565914   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.565948   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.566113   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.566301   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.566459   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.566591   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.566738   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.566894   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.566907   17389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 23:48:17.827868   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
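The printf placeholder in the command above is just how minikube's logger renders the format string; as the echoed output confirms, the step writes a one-line /etc/sysconfig/crio.minikube that marks the 10.96.0.0/12 service CIDR as an insecure registry (presumably so addon registries running inside the cluster can be pulled from without TLS) and then restarts CRI-O. On the node the file would simply contain:

    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '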
	I0813 23:48:17.827895   17389 main.go:141] libmachine: Checking connection to Docker...
	I0813 23:48:17.827904   17389 main.go:141] libmachine: (addons-937866) Calling .GetURL
	I0813 23:48:17.829121   17389 main.go:141] libmachine: (addons-937866) DBG | Using libvirt version 6000000
	I0813 23:48:17.831102   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.831408   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.831438   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.831562   17389 main.go:141] libmachine: Docker is up and running!
	I0813 23:48:17.831579   17389 main.go:141] libmachine: Reticulating splines...
	I0813 23:48:17.831588   17389 client.go:171] duration metric: took 21.737171133s to LocalClient.Create
	I0813 23:48:17.831616   17389 start.go:167] duration metric: took 21.737250787s to libmachine.API.Create "addons-937866"
	I0813 23:48:17.831640   17389 start.go:293] postStartSetup for "addons-937866" (driver="kvm2")
	I0813 23:48:17.831666   17389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 23:48:17.831689   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:17.831918   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 23:48:17.831943   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.833832   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.834180   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.834200   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.834363   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.834558   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.834881   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.835059   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:17.915779   17389 ssh_runner.go:195] Run: cat /etc/os-release
	I0813 23:48:17.919650   17389 info.go:137] Remote host: Buildroot 2023.02.9
	I0813 23:48:17.919674   17389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0813 23:48:17.919742   17389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0813 23:48:17.919773   17389 start.go:296] duration metric: took 88.113995ms for postStartSetup
	I0813 23:48:17.919810   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:48:17.920410   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:17.922970   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.923286   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.923312   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.923518   17389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json ...
	I0813 23:48:17.923689   17389 start.go:128] duration metric: took 21.84598673s to createHost
	I0813 23:48:17.923707   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.925887   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.926184   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.926222   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.926309   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.926490   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.926639   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.926749   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.926891   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.927043   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.927054   17389 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 23:48:18.034343   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723592898.008013281
	
	I0813 23:48:18.034368   17389 fix.go:216] guest clock: 1723592898.008013281
	I0813 23:48:18.034378   17389 fix.go:229] Guest: 2024-08-13 23:48:18.008013281 +0000 UTC Remote: 2024-08-13 23:48:17.923698269 +0000 UTC m=+21.939464763 (delta=84.315012ms)
	I0813 23:48:18.034435   17389 fix.go:200] guest clock delta is within tolerance: 84.315012ms
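The "%!s(MISSING).%!N(MISSING)" above is the logger's rendering of the date format string (presumably date +%s.%N); minikube reads the guest's clock over SSH and compares it with the host's to confirm the skew is within tolerance, as the ~84ms delta shows. A rough manual equivalent, assuming SSH access to the node at the user and address from the log:

    GUEST=$(ssh docker@192.168.39.8 'date +%s.%N')    # guest clock, seconds.nanoseconds
    HOST=$(date +%s.%N)                               # host clock at (roughly) the same moment
    echo "delta: $(echo "$HOST - $GUEST" | bc) s"     # should stay well under a second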
	I0813 23:48:18.034443   17389 start.go:83] releasing machines lock for "addons-937866", held for 21.956814087s
	I0813 23:48:18.034465   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.034721   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:18.037266   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.037681   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.037712   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.037840   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038381   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038557   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038667   17389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0813 23:48:18.038724   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:18.038821   17389 ssh_runner.go:195] Run: cat /version.json
	I0813 23:48:18.038843   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:18.041215   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041458   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041490   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.041514   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041617   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:18.041790   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:18.041844   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.041868   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041929   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:18.042017   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:18.042120   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:18.042205   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:18.042325   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:18.042533   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:18.152396   17389 ssh_runner.go:195] Run: systemctl --version
	I0813 23:48:18.157924   17389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0813 23:48:18.310645   17389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0813 23:48:18.316220   17389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0813 23:48:18.316274   17389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0813 23:48:18.336256   17389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0813 23:48:18.336275   17389 start.go:495] detecting cgroup driver to use...
	I0813 23:48:18.336338   17389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0813 23:48:18.352990   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 23:48:18.366259   17389 docker.go:217] disabling cri-docker service (if available) ...
	I0813 23:48:18.366309   17389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0813 23:48:18.379194   17389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0813 23:48:18.394633   17389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0813 23:48:18.519796   17389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0813 23:48:18.668865   17389 docker.go:233] disabling docker service ...
	I0813 23:48:18.668944   17389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0813 23:48:18.682362   17389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0813 23:48:18.694540   17389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0813 23:48:18.830643   17389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0813 23:48:18.941346   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0813 23:48:18.954278   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 23:48:18.971704   17389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0813 23:48:18.971774   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:18.981199   17389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0813 23:48:18.981264   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:18.990523   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.000175   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.010005   17389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0813 23:48:19.019832   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.030113   17389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.046588   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
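Taken together, the sed edits above point CRI-O at the v1.31-era pause image, switch it to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via a default sysctl. A rough reconstruction of the touched keys in /etc/crio/crio.conf.d/02-crio.conf afterwards (the section headers are an assumption based on stock CRI-O config layout; other keys in the file are omitted):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]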
	I0813 23:48:19.056483   17389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 23:48:19.065842   17389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 23:48:19.065900   17389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0813 23:48:19.079300   17389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
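The sysctl failure a few lines up is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is why minikube falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. Roughly the same preparation by hand (the final sysctl read is an added verification step, not something the log runs):

    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # allow forwarding for pod traffic
    sysctl net.bridge.bridge-nf-call-iptables             # should now resolve instead of failing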
	I0813 23:48:19.088997   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:19.195893   17389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0813 23:48:19.337001   17389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 23:48:19.337114   17389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0813 23:48:19.341809   17389 start.go:563] Will wait 60s for crictl version
	I0813 23:48:19.341877   17389 ssh_runner.go:195] Run: which crictl
	I0813 23:48:19.345245   17389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0813 23:48:19.380819   17389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0813 23:48:19.380952   17389 ssh_runner.go:195] Run: crio --version
	I0813 23:48:19.406256   17389 ssh_runner.go:195] Run: crio --version
	I0813 23:48:19.435050   17389 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0813 23:48:19.436271   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:19.439015   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:19.439287   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:19.439307   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:19.439568   17389 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 23:48:19.443714   17389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 23:48:19.456307   17389 kubeadm.go:883] updating cluster {Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0813 23:48:19.456422   17389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:48:19.456488   17389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0813 23:48:19.487831   17389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0813 23:48:19.487912   17389 ssh_runner.go:195] Run: which lz4
	I0813 23:48:19.491707   17389 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 23:48:19.495730   17389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0813 23:48:19.495758   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0813 23:48:20.568086   17389 crio.go:462] duration metric: took 1.076405627s to copy over tarball
	I0813 23:48:20.568163   17389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 23:48:22.703031   17389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.134838623s)
	I0813 23:48:22.703065   17389 crio.go:469] duration metric: took 2.134951647s to extract the tarball
	I0813 23:48:22.703075   17389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0813 23:48:22.738320   17389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0813 23:48:22.784505   17389 crio.go:514] all images are preloaded for cri-o runtime.
	I0813 23:48:22.784526   17389 cache_images.go:84] Images are preloaded, skipping loading
	I0813 23:48:22.784534   17389 kubeadm.go:934] updating node { 192.168.39.8 8443 v1.31.0 crio true true} ...
	I0813 23:48:22.784646   17389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-937866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0813 23:48:22.784707   17389 ssh_runner.go:195] Run: crio config
	I0813 23:48:22.825980   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:48:22.825999   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:48:22.826008   17389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0813 23:48:22.826035   17389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-937866 NodeName:addons-937866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0813 23:48:22.826198   17389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-937866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 23:48:22.826274   17389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0813 23:48:22.835697   17389 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 23:48:22.835768   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 23:48:22.844796   17389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0813 23:48:22.859923   17389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 23:48:22.874987   17389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0813 23:48:22.890240   17389 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0813 23:48:22.893912   17389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 23:48:22.905336   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:23.017756   17389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 23:48:23.034255   17389 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866 for IP: 192.168.39.8
	I0813 23:48:23.034281   17389 certs.go:194] generating shared ca certs ...
	I0813 23:48:23.034300   17389 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.034463   17389 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0813 23:48:23.097098   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt ...
	I0813 23:48:23.097123   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt: {Name:mk2977fbe2eeb4385cb50c31ef49d890db41b8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.097289   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key ...
	I0813 23:48:23.097299   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key: {Name:mke2ec5f52fb9207c0853de1fa6abf7f31b66110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.097367   17389 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0813 23:48:23.145500   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt ...
	I0813 23:48:23.145526   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt: {Name:mk6a4b7b7b85b800eb2b54749ea5d443607a3feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.145679   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key ...
	I0813 23:48:23.145690   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key: {Name:mk58b1c47f5e33e6b8b6b98b3d9f11f815c4d139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.145757   17389 certs.go:256] generating profile certs ...
	I0813 23:48:23.145809   17389 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key
	I0813 23:48:23.145831   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt with IP's: []
	I0813 23:48:23.285504   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt ...
	I0813 23:48:23.285535   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: {Name:mkeac58f052437b2d744fedbb7b91d00b0fc5f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.285691   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key ...
	I0813 23:48:23.285701   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key: {Name:mka8d943dfc54e96068a797dac3bd89a31200db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.285772   17389 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42
	I0813 23:48:23.285789   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.8]
	I0813 23:48:23.347329   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 ...
	I0813 23:48:23.347357   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42: {Name:mk59cd7c654147be8ecda1106b330647aaf66d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.347503   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42 ...
	I0813 23:48:23.347514   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42: {Name:mkbbf49bb639d8a56eb052d370671f63f5678ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.347579   17389 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt
	I0813 23:48:23.347669   17389 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key
	I0813 23:48:23.347722   17389 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key
	I0813 23:48:23.347739   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt with IP's: []
	I0813 23:48:23.561565   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt ...
	I0813 23:48:23.561600   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt: {Name:mk85241301946cdd3bbc9cef53a4b84f65b6fe58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.561764   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key ...
	I0813 23:48:23.561775   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key: {Name:mk3e6bfee87b00af8f8a4fb1688e115f7968ea18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.561931   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 23:48:23.561965   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0813 23:48:23.561991   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0813 23:48:23.562013   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0813 23:48:23.562602   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 23:48:23.585612   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 23:48:23.608721   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 23:48:23.631365   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 23:48:23.652810   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0813 23:48:23.673830   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 23:48:23.697682   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 23:48:23.723797   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 23:48:23.748347   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 23:48:23.770963   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 23:48:23.785945   17389 ssh_runner.go:195] Run: openssl version
	I0813 23:48:23.791325   17389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 23:48:23.801224   17389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.805088   17389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.805138   17389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.810369   17389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
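The b5213941.0 name comes from OpenSSL's subject-hash convention: the value printed by the openssl x509 -hash command two lines above becomes the symlink name (plus a .0 suffix) so TLS clients scanning /etc/ssl/certs can locate the minikube CA. A minimal sketch of the same two steps done by hand, using the paths from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # hash-named symlink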
	I0813 23:48:23.820184   17389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0813 23:48:23.823707   17389 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0813 23:48:23.823764   17389 kubeadm.go:392] StartCluster: {Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:48:23.823833   17389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 23:48:23.823902   17389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 23:48:23.860040   17389 cri.go:89] found id: ""
	I0813 23:48:23.860109   17389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 23:48:23.869680   17389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 23:48:23.878643   17389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 23:48:23.887340   17389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 23:48:23.887389   17389 kubeadm.go:157] found existing configuration files:
	
	I0813 23:48:23.887440   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 23:48:23.895816   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 23:48:23.895868   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 23:48:23.904305   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 23:48:23.913063   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 23:48:23.913122   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 23:48:23.921804   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 23:48:23.930089   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 23:48:23.930141   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 23:48:23.938746   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 23:48:23.946920   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 23:48:23.946982   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0813 23:48:23.955228   17389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 23:48:24.003406   17389 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0813 23:48:24.003537   17389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 23:48:24.097007   17389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 23:48:24.097143   17389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 23:48:24.097268   17389 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0813 23:48:24.108332   17389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 23:48:24.110514   17389 out.go:204]   - Generating certificates and keys ...
	I0813 23:48:24.110596   17389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 23:48:24.110679   17389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 23:48:24.180944   17389 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 23:48:24.238864   17389 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0813 23:48:24.475259   17389 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0813 23:48:24.699741   17389 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0813 23:48:24.773354   17389 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0813 23:48:24.773868   17389 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-937866 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0813 23:48:25.030034   17389 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0813 23:48:25.030205   17389 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-937866 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0813 23:48:25.146473   17389 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 23:48:25.473747   17389 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 23:48:25.595182   17389 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0813 23:48:25.595722   17389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 23:48:25.694610   17389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 23:48:25.788502   17389 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0813 23:48:25.962252   17389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 23:48:26.297629   17389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 23:48:26.408172   17389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 23:48:26.409112   17389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 23:48:26.411609   17389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 23:48:26.413354   17389 out.go:204]   - Booting up control plane ...
	I0813 23:48:26.413452   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 23:48:26.413540   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 23:48:26.414100   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 23:48:26.437069   17389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 23:48:26.443257   17389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 23:48:26.443334   17389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 23:48:26.566660   17389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0813 23:48:26.566834   17389 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0813 23:48:27.068269   17389 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.823494ms
	I0813 23:48:27.068374   17389 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0813 23:48:32.066559   17389 kubeadm.go:310] [api-check] The API server is healthy after 5.001845026s
	I0813 23:48:32.084830   17389 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 23:48:32.106881   17389 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 23:48:32.140124   17389 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 23:48:32.140400   17389 kubeadm.go:310] [mark-control-plane] Marking the node addons-937866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 23:48:32.154831   17389 kubeadm.go:310] [bootstrap-token] Using token: htc53c.dc5uvt68z1ujyfnc
	I0813 23:48:32.156566   17389 out.go:204]   - Configuring RBAC rules ...
	I0813 23:48:32.156705   17389 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 23:48:32.160739   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 23:48:32.174011   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 23:48:32.177420   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 23:48:32.181224   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 23:48:32.183911   17389 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 23:48:32.473528   17389 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 23:48:32.959993   17389 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 23:48:33.474101   17389 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 23:48:33.475028   17389 kubeadm.go:310] 
	I0813 23:48:33.475114   17389 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 23:48:33.475127   17389 kubeadm.go:310] 
	I0813 23:48:33.475239   17389 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 23:48:33.475251   17389 kubeadm.go:310] 
	I0813 23:48:33.475282   17389 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 23:48:33.475373   17389 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 23:48:33.475464   17389 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 23:48:33.475486   17389 kubeadm.go:310] 
	I0813 23:48:33.475563   17389 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 23:48:33.475572   17389 kubeadm.go:310] 
	I0813 23:48:33.475637   17389 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 23:48:33.475646   17389 kubeadm.go:310] 
	I0813 23:48:33.475749   17389 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 23:48:33.475855   17389 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 23:48:33.475951   17389 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 23:48:33.475969   17389 kubeadm.go:310] 
	I0813 23:48:33.476069   17389 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 23:48:33.476171   17389 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 23:48:33.476180   17389 kubeadm.go:310] 
	I0813 23:48:33.476306   17389 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token htc53c.dc5uvt68z1ujyfnc \
	I0813 23:48:33.476435   17389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0813 23:48:33.476463   17389 kubeadm.go:310] 	--control-plane 
	I0813 23:48:33.476476   17389 kubeadm.go:310] 
	I0813 23:48:33.476572   17389 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 23:48:33.476584   17389 kubeadm.go:310] 
	I0813 23:48:33.476689   17389 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token htc53c.dc5uvt68z1ujyfnc \
	I0813 23:48:33.476822   17389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0813 23:48:33.477490   17389 kubeadm.go:310] W0813 23:48:23.981767     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0813 23:48:33.477815   17389 kubeadm.go:310] W0813 23:48:23.982640     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0813 23:48:33.477944   17389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0813 23:48:33.477985   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:48:33.477998   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:48:33.479696   17389 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 23:48:33.480891   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 23:48:33.490848   17389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0813 23:48:33.507814   17389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 23:48:33.507900   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:33.507944   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-937866 minikube.k8s.io/updated_at=2024_08_13T23_48_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=addons-937866 minikube.k8s.io/primary=true
	I0813 23:48:33.526829   17389 ops.go:34] apiserver oom_adj: -16
	I0813 23:48:33.654478   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:34.155263   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:34.655260   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:35.154984   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:35.655077   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:36.155495   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:36.654653   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:37.155298   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:37.237329   17389 kubeadm.go:1113] duration metric: took 3.729495612s to wait for elevateKubeSystemPrivileges
	I0813 23:48:37.237370   17389 kubeadm.go:394] duration metric: took 13.413610914s to StartCluster
	I0813 23:48:37.237394   17389 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:37.237545   17389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:48:37.238069   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:37.238282   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 23:48:37.238298   17389 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0813 23:48:37.238346   17389 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0813 23:48:37.238458   17389 addons.go:69] Setting helm-tiller=true in profile "addons-937866"
	I0813 23:48:37.238471   17389 addons.go:69] Setting yakd=true in profile "addons-937866"
	I0813 23:48:37.238479   17389 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-937866"
	I0813 23:48:37.238493   17389 addons.go:234] Setting addon helm-tiller=true in "addons-937866"
	I0813 23:48:37.238496   17389 addons.go:234] Setting addon yakd=true in "addons-937866"
	I0813 23:48:37.238491   17389 addons.go:69] Setting ingress-dns=true in profile "addons-937866"
	I0813 23:48:37.238517   17389 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-937866"
	I0813 23:48:37.238524   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238530   17389 addons.go:69] Setting registry=true in profile "addons-937866"
	I0813 23:48:37.238530   17389 addons.go:69] Setting ingress=true in profile "addons-937866"
	I0813 23:48:37.238547   17389 addons.go:234] Setting addon registry=true in "addons-937866"
	I0813 23:48:37.238555   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238559   17389 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-937866"
	I0813 23:48:37.238566   17389 addons.go:69] Setting inspektor-gadget=true in profile "addons-937866"
	I0813 23:48:37.238575   17389 addons.go:69] Setting storage-provisioner=true in profile "addons-937866"
	I0813 23:48:37.238590   17389 addons.go:234] Setting addon inspektor-gadget=true in "addons-937866"
	I0813 23:48:37.238597   17389 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-937866"
	I0813 23:48:37.238603   17389 addons.go:69] Setting cloud-spanner=true in profile "addons-937866"
	I0813 23:48:37.238614   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238618   17389 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-937866"
	I0813 23:48:37.238623   17389 addons.go:234] Setting addon cloud-spanner=true in "addons-937866"
	I0813 23:48:37.238647   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238968   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238985   17389 addons.go:69] Setting default-storageclass=true in profile "addons-937866"
	I0813 23:48:37.238985   17389 addons.go:69] Setting gcp-auth=true in profile "addons-937866"
	I0813 23:48:37.239006   17389 mustload.go:65] Loading cluster: addons-937866
	I0813 23:48:37.239008   17389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-937866"
	I0813 23:48:37.239012   17389 addons.go:69] Setting volcano=true in profile "addons-937866"
	I0813 23:48:37.239016   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239024   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239032   17389 addons.go:234] Setting addon volcano=true in "addons-937866"
	I0813 23:48:37.239044   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239052   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239061   17389 addons.go:69] Setting metrics-server=true in profile "addons-937866"
	I0813 23:48:37.239086   17389 addons.go:234] Setting addon metrics-server=true in "addons-937866"
	I0813 23:48:37.239115   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239166   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:37.239322   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239347   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239351   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239375   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239484   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238555   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:37.239507   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239513   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238522   17389 addons.go:234] Setting addon ingress-dns=true in "addons-937866"
	I0813 23:48:37.238549   17389 addons.go:234] Setting addon ingress=true in "addons-937866"
	I0813 23:48:37.239550   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239558   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239570   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238592   17389 addons.go:234] Setting addon storage-provisioner=true in "addons-937866"
	I0813 23:48:37.239611   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239491   17389 addons.go:69] Setting volumesnapshots=true in profile "addons-937866"
	I0813 23:48:37.239698   17389 addons.go:234] Setting addon volumesnapshots=true in "addons-937866"
	I0813 23:48:37.239745   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239014   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239055   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239927   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239944   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238571   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240002   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240089   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240124   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240128   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240155   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238525   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.240318   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240381   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240526   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240551   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.242196   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238598   17389 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-937866"
	I0813 23:48:37.257636   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.248104   17389 out.go:177] * Verifying Kubernetes components...
	I0813 23:48:37.248197   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.258202   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.258463   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.260051   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:37.260182   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0813 23:48:37.260360   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0813 23:48:37.260476   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0813 23:48:37.260796   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.260899   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.260910   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.261623   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261642   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.261641   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261697   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.261731   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261758   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.262256   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262272   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262345   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262804   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.262819   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.262844   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.262877   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.262929   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0813 23:48:37.263136   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.263222   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.263667   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.263686   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.264107   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.264364   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.265241   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.268896   17389 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-937866"
	I0813 23:48:37.268943   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.269393   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.269438   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.270187   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0813 23:48:37.270635   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.270662   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.272649   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.273156   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.273179   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.273530   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.274034   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.274086   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.280482   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0813 23:48:37.280616   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0813 23:48:37.281171   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.281754   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.281772   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.282148   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.282677   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.282717   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.283694   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.290557   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.290589   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.294466   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.298270   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.300168   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0813 23:48:37.300327   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0813 23:48:37.300764   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.301355   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.301375   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.301773   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.302429   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.302467   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.302661   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0813 23:48:37.303022   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.303112   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.303635   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.303655   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.303802   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.303814   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.305118   17389 addons.go:234] Setting addon default-storageclass=true in "addons-937866"
	I0813 23:48:37.305158   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.305566   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.305596   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.305819   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.305881   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.305920   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0813 23:48:37.306130   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.306400   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.306439   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.306835   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.307330   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.307354   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.307685   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.307735   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.308627   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.308663   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.308842   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 23:48:37.309270   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.309725   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.309740   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.310035   17389 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0813 23:48:37.310194   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0813 23:48:37.310545   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.310629   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.311133   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.311154   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.311362   17389 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0813 23:48:37.311377   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0813 23:48:37.311395   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.311505   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.311560   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.311589   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.311720   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.311721   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0813 23:48:37.312089   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.312559   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.312580   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.312890   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.313401   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.313442   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.315743   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.316392   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.316417   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.316557   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.316665   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.316744   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.316827   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.317557   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.317844   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:37.317872   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:37.319654   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:37.319676   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:37.319690   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:37.319712   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:37.319721   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:37.319949   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:37.319957   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:37.319968   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	W0813 23:48:37.320057   17389 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0813 23:48:37.321837   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0813 23:48:37.322281   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.322721   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.322742   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.323044   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.323184   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.324831   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.326598   17389 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0813 23:48:37.327831   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 23:48:37.327854   17389 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0813 23:48:37.327873   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.331722   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.332319   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.332339   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.332612   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.332829   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.333006   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.333155   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.334728   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0813 23:48:37.334909   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0813 23:48:37.335240   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.335323   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.335714   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.335728   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.336027   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.336430   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.336445   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.336536   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.336571   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.336852   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.337057   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.337468   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0813 23:48:37.338663   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.339088   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.339137   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.339159   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.339500   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.339696   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.340978   17389 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0813 23:48:37.341302   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.342494   17389 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0813 23:48:37.342514   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0813 23:48:37.342532   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.343109   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0813 23:48:37.344075   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I0813 23:48:37.345593   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0813 23:48:37.345704   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.346017   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.346036   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.346200   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.346377   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.346526   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.346650   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.348218   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.348311   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0813 23:48:37.348751   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.348767   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.348951   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0813 23:48:37.349138   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.349544   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.349676   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0813 23:48:37.349785   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.349817   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.350072   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.350149   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0813 23:48:37.350572   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.350630   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.350648   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.350786   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0813 23:48:37.350913   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.351297   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.351324   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.351416   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.351434   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.352264   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.352306   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.352529   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.353005   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.353036   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.353203   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.353392   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.353547   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0813 23:48:37.354523   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0813 23:48:37.354943   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.355163   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.355585   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.355608   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.355758   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0813 23:48:37.355932   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.356275   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.359157   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0813 23:48:37.359420   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0813 23:48:37.360262   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 23:48:37.360283   17389 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 23:48:37.360304   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.360322   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0813 23:48:37.360404   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0813 23:48:37.360854   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.361304   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 23:48:37.361323   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 23:48:37.361342   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.361422   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.361436   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.361790   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.361990   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.363699   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.364069   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.364239   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0813 23:48:37.364428   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.364445   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.364849   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.364922   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.365087   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.365450   17389 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0813 23:48:37.365687   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.365703   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.365762   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.365960   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.366202   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.366840   17389 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0813 23:48:37.366860   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0813 23:48:37.366878   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.366940   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.367250   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.367291   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.368335   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.368370   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.368603   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.368808   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.369087   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.369264   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.370236   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.370750   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0813 23:48:37.371153   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.371169   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.371204   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.371688   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.371704   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.371761   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.371976   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.372176   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.372370   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.372965   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.373180   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.376430   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0813 23:48:37.376456   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.376949   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0813 23:48:37.377097   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.377363   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.377920   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.377944   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.378230   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.378259   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.378309   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.378411   17389 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0813 23:48:37.378542   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.379211   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.379383   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0813 23:48:37.379820   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.379986   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0813 23:48:37.379999   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0813 23:48:37.380017   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.380268   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.380282   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.380337   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.381128   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.381169   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.382193   17389 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 23:48:37.382547   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0813 23:48:37.383048   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.383827   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.383846   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.383862   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.383867   17389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 23:48:37.383903   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 23:48:37.383918   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.383959   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.384205   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.384387   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.384407   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.384912   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.385138   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.385200   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.385468   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.385541   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.385642   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.386559   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.388084   17389 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0813 23:48:37.389187   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I0813 23:48:37.389301   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0813 23:48:37.389316   17389 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0813 23:48:37.389334   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.389683   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.390112   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.390165   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.390177   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.390972   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.391036   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.391216   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.391347   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.391368   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.391566   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.391717   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0813 23:48:37.391734   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.391906   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.392164   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.393004   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.393252   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0813 23:48:37.393494   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.393514   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.393593   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.393862   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.393890   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.394127   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.394135   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:37.394180   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.394458   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.394477   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.394789   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.394838   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.395253   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.395366   17389 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0813 23:48:37.396480   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:37.396686   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.396986   17389 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 23:48:37.397001   17389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 23:48:37.397018   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.397061   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0813 23:48:37.397404   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.397554   17389 out.go:177]   - Using image docker.io/busybox:stable
	I0813 23:48:37.397815   17389 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0813 23:48:37.397835   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0813 23:48:37.397849   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.397875   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.397887   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.398239   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.398450   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.398870   17389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0813 23:48:37.398888   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0813 23:48:37.398904   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.400880   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.402871   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.402886   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.402902   17389 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0813 23:48:37.402910   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.402982   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403030   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.403192   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.403214   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403247   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.403303   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403487   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.403494   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.403663   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.403916   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.403932   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.403939   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.404031   17389 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0813 23:48:37.404047   17389 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0813 23:48:37.404065   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.404102   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.404684   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.405062   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.405269   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.405432   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.405556   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.406374   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0813 23:48:37.407095   17389 main.go:141] libmachine: () Calling .GetVersion
	W0813 23:48:37.407234   17389 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37800->192.168.39.8:22: read: connection reset by peer
	I0813 23:48:37.407255   17389 retry.go:31] will retry after 192.929271ms: ssh: handshake failed: read tcp 192.168.39.1:37800->192.168.39.8:22: read: connection reset by peer
	I0813 23:48:37.407563   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.407770   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.407789   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.408030   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.408049   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.408191   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.408214   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.408370   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.408373   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.408506   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.408632   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.409889   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.411353   17389 out.go:177]   - Using image docker.io/registry:2.8.3
	I0813 23:48:37.412557   17389 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0813 23:48:37.413669   17389 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 23:48:37.413684   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0813 23:48:37.413706   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.416878   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.417291   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.417314   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.417454   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.417656   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.417798   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.417945   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.706081   17389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 23:48:37.707962   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 23:48:37.726150   17389 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 23:48:37.726173   17389 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 23:48:37.747197   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 23:48:37.747217   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0813 23:48:37.823431   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0813 23:48:37.928090   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0813 23:48:37.928111   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0813 23:48:37.928344   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0813 23:48:37.971749   17389 node_ready.go:35] waiting up to 6m0s for node "addons-937866" to be "Ready" ...
	I0813 23:48:37.978242   17389 node_ready.go:49] node "addons-937866" has status "Ready":"True"
	I0813 23:48:37.978271   17389 node_ready.go:38] duration metric: took 6.494553ms for node "addons-937866" to be "Ready" ...
	I0813 23:48:37.978284   17389 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 23:48:37.979105   17389 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 23:48:37.979122   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0813 23:48:38.017932   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0813 23:48:38.017957   17389 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0813 23:48:38.026363   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 23:48:38.029945   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0813 23:48:38.031413   17389 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:38.032312   17389 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0813 23:48:38.032326   17389 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0813 23:48:38.034494   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 23:48:38.034508   17389 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0813 23:48:38.048324   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 23:48:38.048347   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 23:48:38.048953   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 23:48:38.058771   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0813 23:48:38.080228   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0813 23:48:38.146953   17389 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0813 23:48:38.146976   17389 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0813 23:48:38.170939   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0813 23:48:38.170962   17389 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0813 23:48:38.174228   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 23:48:38.174245   17389 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0813 23:48:38.198818   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0813 23:48:38.198838   17389 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0813 23:48:38.280387   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 23:48:38.291444   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 23:48:38.291470   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0813 23:48:38.345715   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 23:48:38.345745   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 23:48:38.364841   17389 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0813 23:48:38.364868   17389 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0813 23:48:38.390098   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 23:48:38.390122   17389 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0813 23:48:38.421644   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 23:48:38.446482   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0813 23:48:38.446504   17389 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0813 23:48:38.553826   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 23:48:38.553850   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0813 23:48:38.576406   17389 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0813 23:48:38.576428   17389 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0813 23:48:38.579688   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 23:48:38.579705   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0813 23:48:38.653128   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 23:48:38.735032   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0813 23:48:38.735055   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0813 23:48:38.792541   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 23:48:38.792567   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0813 23:48:38.796030   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 23:48:38.796053   17389 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0813 23:48:38.824222   17389 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0813 23:48:38.824247   17389 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0813 23:48:38.979555   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0813 23:48:39.011618   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 23:48:39.011641   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0813 23:48:39.031256   17389 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:39.031276   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0813 23:48:39.141039   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:39.164078   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 23:48:39.164102   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0813 23:48:39.164763   17389 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0813 23:48:39.164779   17389 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0813 23:48:39.428182   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 23:48:39.428209   17389 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0813 23:48:39.496847   17389 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0813 23:48:39.496873   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0813 23:48:39.724770   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0813 23:48:39.767305   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 23:48:39.767331   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0813 23:48:39.862585   17389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.154592722s)
	I0813 23:48:39.862621   17389 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0813 23:48:40.038812   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:40.140702   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 23:48:40.140723   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0813 23:48:40.353191   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 23:48:40.353217   17389 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 23:48:40.368324   17389 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-937866" context rescaled to 1 replicas
	I0813 23:48:40.667892   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 23:48:42.062104   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:44.116373   17389 pod_ready.go:92] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:44.116404   17389 pod_ready.go:81] duration metric: took 6.084969999s for pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:44.116416   17389 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:44.387202   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0813 23:48:44.387235   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:44.390396   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:44.390786   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:44.390817   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:44.390970   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:44.391170   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:44.391340   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:44.391485   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:44.926184   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0813 23:48:44.984658   17389 addons.go:234] Setting addon gcp-auth=true in "addons-937866"
	I0813 23:48:44.984722   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:44.985199   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:44.985244   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:45.000914   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0813 23:48:45.001353   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:45.001892   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:45.001920   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:45.002238   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:45.002833   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:45.002868   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:45.018910   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0813 23:48:45.019280   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:45.019785   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:45.019809   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:45.020151   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:45.020344   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:45.021996   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:45.022228   17389 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0813 23:48:45.022248   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:45.024869   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:45.025249   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:45.025272   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:45.025430   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:45.025597   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:45.025737   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:45.025854   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:45.638443   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.814982912s)
	I0813 23:48:45.638496   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638498   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.710127754s)
	I0813 23:48:45.638539   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638556   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638508   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638609   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.589637115s)
	I0813 23:48:45.638631   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638645   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638677   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.579879698s)
	I0813 23:48:45.638541   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.612157426s)
	I0813 23:48:45.638699   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638712   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638744   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638744   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.558489355s)
	I0813 23:48:45.638762   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638777   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638783   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.358369214s)
	I0813 23:48:45.638788   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638800   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638810   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638582   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.608615632s)
	I0813 23:48:45.638829   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638838   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638909   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.217236424s)
	I0813 23:48:45.638934   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638945   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639014   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.985811076s)
	I0813 23:48:45.639035   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.659448322s)
	I0813 23:48:45.639047   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639059   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639058   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639071   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639203   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498130942s)
	W0813 23:48:45.639268   17389 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0813 23:48:45.639290   17389 retry.go:31] will retry after 269.107791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0813 23:48:45.639355   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.914546449s)
	I0813 23:48:45.639379   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639399   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642604   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642611   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642618   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642668   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642686   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642690   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642701   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642713   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642729   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642716   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642769   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642770   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642645   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642782   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642752   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642757   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642790   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642795   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642810   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642823   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642840   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642881   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642904   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642923   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642942   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642652   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642979   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642981   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642987   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642991   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642995   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643008   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642999   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642964   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643098   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643127   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643134   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643186   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642671   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643206   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643214   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643221   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643263   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643283   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643307   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643322   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643331   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643388   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642676   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643427   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643435   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643477   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642933   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643518   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643535   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643543   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643572   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642949   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643601   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643613   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643626   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643288   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643734   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643755   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643823   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643863   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643887   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644059   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644075   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644084   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.644092   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.644113   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644139   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644147   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644155   17389 addons.go:475] Verifying addon ingress=true in "addons-937866"
	I0813 23:48:45.644397   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644415   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644436   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644445   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.645291   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.645316   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.645323   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643590   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643693   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.646069   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.646079   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.646087   17389 addons.go:475] Verifying addon registry=true in "addons-937866"
	I0813 23:48:45.646278   17389 out.go:177] * Verifying ingress addon...
	I0813 23:48:45.646818   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.646828   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647278   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.647278   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.647284   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.647292   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647301   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.647310   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647317   17389 addons.go:475] Verifying addon metrics-server=true in "addons-937866"
	I0813 23:48:45.647389   17389 out.go:177] * Verifying registry addon...
	I0813 23:48:45.648252   17389 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 23:48:45.648645   17389 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-937866 service yakd-dashboard -n yakd-dashboard
	
	I0813 23:48:45.649441   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 23:48:45.664291   17389 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 23:48:45.664314   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:45.664474   17389 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 23:48:45.664498   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:45.702625   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.702645   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.702926   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.702946   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	W0813 23:48:45.703022   17389 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0813 23:48:45.702926   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.707332   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.707348   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.707572   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.707589   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.908712   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:46.127205   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:46.156242   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:46.156435   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:46.588768   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.920810342s)
	I0813 23:48:46.588818   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:46.588832   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:46.588775   17389 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.566524559s)
	I0813 23:48:46.589100   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:46.589120   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:46.589132   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:46.589146   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:46.589159   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:46.589413   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:46.589430   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:46.589437   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:46.589459   17389 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-937866"
	I0813 23:48:46.591107   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:46.591113   17389 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 23:48:46.592494   17389 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0813 23:48:46.593099   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 23:48:46.593493   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0813 23:48:46.593508   17389 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0813 23:48:46.607372   17389 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 23:48:46.607391   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:46.671972   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0813 23:48:46.671994   17389 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0813 23:48:46.673979   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:46.674312   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:46.754905   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 23:48:46.754927   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0813 23:48:46.835695   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 23:48:47.103669   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:47.152591   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:47.152915   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:47.598634   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:47.652519   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:47.652912   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:47.738249   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.829486095s)
	I0813 23:48:47.738303   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:47.738317   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:47.738641   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:47.738661   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:47.738661   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:47.738671   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:47.738679   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:47.738885   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:47.738897   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.175682   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:48.180153   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.344420445s)
	I0813 23:48:48.180209   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:48.180226   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:48.180547   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:48.180610   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.180635   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:48.180651   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:48.180608   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:48.180923   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:48.180942   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.182726   17389 addons.go:475] Verifying addon gcp-auth=true in "addons-937866"
	I0813 23:48:48.184296   17389 out.go:177] * Verifying gcp-auth addon...
	I0813 23:48:48.186137   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0813 23:48:48.209211   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:48.232972   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:48.233134   17389 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0813 23:48:48.233159   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:48.233550   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:48.598896   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:48.653497   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:48.653611   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:48.690412   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:49.097830   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:49.153209   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:49.153525   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:49.190218   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:49.597264   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:49.653896   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:49.654417   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:49.689768   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.097569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:50.122112   17389 pod_ready.go:97] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.8 HostIPs:[{IP:192.168.39.8}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-13 23:48:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-13 23:48:41 +0000 UTC,FinishedAt:2024-08-13 23:48:48 +0000 UTC,ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf Started:0xc00280d810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00281a900} {Name:kube-api-access-45gsc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00281a910}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0813 23:48:50.122147   17389 pod_ready.go:81] duration metric: took 6.005722085s for pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace to be "Ready" ...
	E0813 23:48:50.122168   17389 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.8 HostIPs:[{IP:192.168.39.8}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-13 23:48:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-13 23:48:41 +0000 UTC,FinishedAt:2024-08-13 23:48:48 +0000 UTC,ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf Started:0xc00280d810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00281a900} {Name:kube-api-access-45gsc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00281a910}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0813 23:48:50.122183   17389 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.126292   17389 pod_ready.go:92] pod "etcd-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.126313   17389 pod_ready.go:81] duration metric: took 4.120047ms for pod "etcd-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.126325   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.130912   17389 pod_ready.go:92] pod "kube-apiserver-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.130931   17389 pod_ready.go:81] duration metric: took 4.598167ms for pod "kube-apiserver-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.130942   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.135325   17389 pod_ready.go:92] pod "kube-controller-manager-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.135340   17389 pod_ready.go:81] duration metric: took 4.391855ms for pod "kube-controller-manager-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.135351   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-824wz" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.140106   17389 pod_ready.go:92] pod "kube-proxy-824wz" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.140122   17389 pod_ready.go:81] duration metric: took 4.764171ms for pod "kube-proxy-824wz" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.140131   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.152083   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:50.155833   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:50.190421   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.520537   17389 pod_ready.go:92] pod "kube-scheduler-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.520567   17389 pod_ready.go:81] duration metric: took 380.427953ms for pod "kube-scheduler-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.520580   17389 pod_ready.go:38] duration metric: took 12.542275153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 23:48:50.520598   17389 api_server.go:52] waiting for apiserver process to appear ...
	I0813 23:48:50.520675   17389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 23:48:50.563714   17389 api_server.go:72] duration metric: took 13.32538518s to wait for apiserver process to appear ...
	I0813 23:48:50.563748   17389 api_server.go:88] waiting for apiserver healthz status ...
	I0813 23:48:50.563771   17389 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 23:48:50.571204   17389 api_server.go:279] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0813 23:48:50.572748   17389 api_server.go:141] control plane version: v1.31.0
	I0813 23:48:50.572774   17389 api_server.go:131] duration metric: took 9.018119ms to wait for apiserver health ...
	I0813 23:48:50.572783   17389 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 23:48:50.600576   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:50.655035   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:50.658972   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:50.690600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.728332   17389 system_pods.go:59] 18 kube-system pods found
	I0813 23:48:50.728364   17389 system_pods.go:61] "coredns-6f6b679f8f-mq64k" [0528e757-cec5-40d0-9a8e-12819640a8db] Running
	I0813 23:48:50.728372   17389 system_pods.go:61] "csi-hostpath-attacher-0" [e4801af2-e316-4c00-bb1a-f69134d81190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 23:48:50.728378   17389 system_pods.go:61] "csi-hostpath-resizer-0" [f5bda74c-dfef-4e1c-857d-7d252de5db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 23:48:50.728393   17389 system_pods.go:61] "csi-hostpathplugin-vxpnf" [17d9d31f-6635-4275-9b5e-4bfa444ec3da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0813 23:48:50.728398   17389 system_pods.go:61] "etcd-addons-937866" [6d636c7e-8378-4d77-8a06-c97743bddc68] Running
	I0813 23:48:50.728402   17389 system_pods.go:61] "kube-apiserver-addons-937866" [9191440b-abcb-45ce-901c-ef6578bec1e0] Running
	I0813 23:48:50.728407   17389 system_pods.go:61] "kube-controller-manager-addons-937866" [8063133c-4ca8-4683-882a-37dbd1cd0ac0] Running
	I0813 23:48:50.728412   17389 system_pods.go:61] "kube-ingress-dns-minikube" [1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0813 23:48:50.728415   17389 system_pods.go:61] "kube-proxy-824wz" [8453a99d-976e-4371-9c3b-104af4136766] Running
	I0813 23:48:50.728419   17389 system_pods.go:61] "kube-scheduler-addons-937866" [b1f5df74-7ed9-4837-8cfb-deef2ecb11ca] Running
	I0813 23:48:50.728423   17389 system_pods.go:61] "metrics-server-8988944d9-mnlqq" [82850aaa-4f93-49e5-b89b-e86bc208fd74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 23:48:50.728430   17389 system_pods.go:61] "nvidia-device-plugin-daemonset-mg5kj" [decbf56f-a46d-4b32-a963-1abb25adfab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0813 23:48:50.728443   17389 system_pods.go:61] "registry-6fb4cdfc84-d8ptz" [03e452f4-85d3-486e-bf4e-30e1bf8b8929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 23:48:50.728449   17389 system_pods.go:61] "registry-proxy-9lq9k" [1cb9d48b-73e5-4500-bb30-902eac13720e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 23:48:50.728455   17389 system_pods.go:61] "snapshot-controller-56fcc65765-fnm49" [98fb76a3-1db4-4ad5-b71c-c64a3e5c97d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:50.728479   17389 system_pods.go:61] "snapshot-controller-56fcc65765-jg4b7" [fd5994b7-7852-4377-9f88-fa1d4de1138f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:50.728490   17389 system_pods.go:61] "storage-provisioner" [9ba3f553-c9e7-46cf-b4b9-a0e0246b026a] Running
	I0813 23:48:50.728496   17389 system_pods.go:61] "tiller-deploy-b48cc5f79-p2hvc" [66ce562c-db93-4b51-b8be-ce14bacba0f8] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 23:48:50.728501   17389 system_pods.go:74] duration metric: took 155.713541ms to wait for pod list to return data ...
	I0813 23:48:50.728509   17389 default_sa.go:34] waiting for default service account to be created ...
	I0813 23:48:50.920520   17389 default_sa.go:45] found service account: "default"
	I0813 23:48:50.920547   17389 default_sa.go:55] duration metric: took 192.03021ms for default service account to be created ...
	I0813 23:48:50.920555   17389 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 23:48:51.098244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:51.128050   17389 system_pods.go:86] 18 kube-system pods found
	I0813 23:48:51.128079   17389 system_pods.go:89] "coredns-6f6b679f8f-mq64k" [0528e757-cec5-40d0-9a8e-12819640a8db] Running
	I0813 23:48:51.128088   17389 system_pods.go:89] "csi-hostpath-attacher-0" [e4801af2-e316-4c00-bb1a-f69134d81190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 23:48:51.128096   17389 system_pods.go:89] "csi-hostpath-resizer-0" [f5bda74c-dfef-4e1c-857d-7d252de5db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 23:48:51.128106   17389 system_pods.go:89] "csi-hostpathplugin-vxpnf" [17d9d31f-6635-4275-9b5e-4bfa444ec3da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0813 23:48:51.128113   17389 system_pods.go:89] "etcd-addons-937866" [6d636c7e-8378-4d77-8a06-c97743bddc68] Running
	I0813 23:48:51.128122   17389 system_pods.go:89] "kube-apiserver-addons-937866" [9191440b-abcb-45ce-901c-ef6578bec1e0] Running
	I0813 23:48:51.128133   17389 system_pods.go:89] "kube-controller-manager-addons-937866" [8063133c-4ca8-4683-882a-37dbd1cd0ac0] Running
	I0813 23:48:51.128143   17389 system_pods.go:89] "kube-ingress-dns-minikube" [1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0813 23:48:51.128155   17389 system_pods.go:89] "kube-proxy-824wz" [8453a99d-976e-4371-9c3b-104af4136766] Running
	I0813 23:48:51.128164   17389 system_pods.go:89] "kube-scheduler-addons-937866" [b1f5df74-7ed9-4837-8cfb-deef2ecb11ca] Running
	I0813 23:48:51.128172   17389 system_pods.go:89] "metrics-server-8988944d9-mnlqq" [82850aaa-4f93-49e5-b89b-e86bc208fd74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 23:48:51.128183   17389 system_pods.go:89] "nvidia-device-plugin-daemonset-mg5kj" [decbf56f-a46d-4b32-a963-1abb25adfab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0813 23:48:51.128193   17389 system_pods.go:89] "registry-6fb4cdfc84-d8ptz" [03e452f4-85d3-486e-bf4e-30e1bf8b8929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 23:48:51.128209   17389 system_pods.go:89] "registry-proxy-9lq9k" [1cb9d48b-73e5-4500-bb30-902eac13720e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 23:48:51.128222   17389 system_pods.go:89] "snapshot-controller-56fcc65765-fnm49" [98fb76a3-1db4-4ad5-b71c-c64a3e5c97d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:51.128235   17389 system_pods.go:89] "snapshot-controller-56fcc65765-jg4b7" [fd5994b7-7852-4377-9f88-fa1d4de1138f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:51.128243   17389 system_pods.go:89] "storage-provisioner" [9ba3f553-c9e7-46cf-b4b9-a0e0246b026a] Running
	I0813 23:48:51.128249   17389 system_pods.go:89] "tiller-deploy-b48cc5f79-p2hvc" [66ce562c-db93-4b51-b8be-ce14bacba0f8] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 23:48:51.128258   17389 system_pods.go:126] duration metric: took 207.697714ms to wait for k8s-apps to be running ...
	I0813 23:48:51.128271   17389 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 23:48:51.128319   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 23:48:51.153565   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:51.154896   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:51.169250   17389 system_svc.go:56] duration metric: took 40.970036ms WaitForService to wait for kubelet
	I0813 23:48:51.169280   17389 kubeadm.go:582] duration metric: took 13.930952977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 23:48:51.169304   17389 node_conditions.go:102] verifying NodePressure condition ...
	I0813 23:48:51.190455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:51.320331   17389 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0813 23:48:51.320354   17389 node_conditions.go:123] node cpu capacity is 2
	I0813 23:48:51.320365   17389 node_conditions.go:105] duration metric: took 151.056247ms to run NodePressure ...
	I0813 23:48:51.320376   17389 start.go:241] waiting for startup goroutines ...
	I0813 23:48:51.598588   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:51.653107   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:51.653326   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:51.692851   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:52.099804   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:52.153281   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:52.155502   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:52.190023   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:52.597256   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:52.652564   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:52.652718   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:52.689665   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:53.206551   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:53.304933   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:53.305048   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:53.305217   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:53.598948   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:53.654857   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:53.656449   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:53.690461   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:54.098849   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:54.154017   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:54.155736   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:54.190375   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:54.598273   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:54.652538   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:54.653119   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:54.689208   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:55.098261   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:55.153346   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:55.154087   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:55.197475   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:55.597907   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:55.653179   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:55.653350   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:55.689501   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:56.098424   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:56.152239   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:56.153435   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:56.189617   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:56.598247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:56.652305   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:56.652802   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:56.689085   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:57.097959   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:57.155254   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:57.155346   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:57.190012   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:57.599203   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:57.653988   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:57.654065   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:57.688912   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:58.097780   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:58.152705   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:58.153359   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:58.190051   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:58.597770   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:58.652609   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:58.653210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:58.689489   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:59.097730   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:59.152398   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:59.154600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:59.189354   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:59.598236   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:59.652890   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:59.653745   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:59.689296   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:00.097482   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:00.153192   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:00.153934   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:00.189627   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:00.597977   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:00.653060   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:00.653570   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:00.689755   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:01.097940   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:01.152212   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:01.152970   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:01.189175   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:01.597601   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:01.651972   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:01.652248   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:01.697613   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:02.097919   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:02.152873   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:02.153289   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:02.189844   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:02.599988   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:02.652114   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:02.653489   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:02.691051   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:03.098643   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:03.153666   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:03.153790   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:03.191787   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:03.600120   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:03.658600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:03.664700   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:03.690079   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:04.098263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:04.152701   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:04.155825   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:04.189286   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:04.597928   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:04.652535   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:04.652780   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:04.688939   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:05.099623   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:05.199919   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:05.200010   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:05.200230   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:05.598190   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:05.653973   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:05.657138   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:05.700398   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:06.099861   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:06.153860   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:06.155802   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:06.189190   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:06.597871   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:06.652693   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:06.653223   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:06.691423   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:07.098729   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:07.152395   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:07.152823   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:07.190229   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:07.598319   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:07.653662   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:07.654230   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:07.697679   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:08.098263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:08.154157   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:08.154357   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:08.196256   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:08.598169   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:08.653539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:08.653651   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:08.692981   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:09.098420   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:09.153840   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:09.153901   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:09.189316   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:09.597721   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:09.653112   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:09.653493   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:09.689569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:10.098378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:10.152117   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:10.153926   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:10.189318   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:10.597507   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:10.652372   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:10.653558   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:10.688493   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:11.097975   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:11.152929   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:11.152942   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:11.189508   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:11.597522   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:11.652757   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:11.652794   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:11.697235   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:12.097550   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:12.151778   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:12.153054   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:12.189953   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:12.604938   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:12.653616   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:12.654531   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:12.690372   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.097263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:13.152309   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:13.153397   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:13.190275   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.726269   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:13.726496   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:13.727028   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.727063   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.098430   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.152594   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:14.153283   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:14.189661   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:14.597775   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.653719   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:14.653865   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:14.689102   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:15.097049   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:15.152721   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:15.152905   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:15.189378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:15.598094   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:15.653060   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:15.653187   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:15.689956   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:16.098173   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:16.152118   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:16.153288   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:16.189681   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:16.597194   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:16.652564   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:16.653449   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:16.689527   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:17.098000   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:17.152516   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:17.156033   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:17.189581   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:17.772720   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:17.773124   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:17.773397   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:17.774222   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:18.098231   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:18.152110   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:18.152685   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:18.189532   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:18.597523   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:18.653735   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:18.654773   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:18.689434   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:19.097563   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:19.153811   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:19.154102   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:19.189332   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:19.597314   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:19.651857   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:19.653523   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:19.688933   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:20.098253   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:20.152039   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:20.152515   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:20.189753   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:20.598186   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:20.652014   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:20.652892   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:20.689003   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:21.097642   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:21.152684   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:21.153025   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:21.189536   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:21.598201   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:21.653417   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:21.653444   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:21.698186   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:22.098268   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:22.153100   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:22.154569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:22.189184   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:22.598027   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:22.653179   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:22.653440   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:22.689931   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:23.097490   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:23.152937   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:23.153244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:23.189842   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:23.598247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:23.653362   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:23.653833   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:23.689539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:24.098766   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:24.153612   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:24.154300   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:24.189793   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:24.598322   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:24.651940   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:24.652921   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:24.689539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:25.097167   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:25.152344   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:25.153606   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:25.189243   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:25.597675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:25.653937   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:25.655255   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:25.689834   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:26.097463   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:26.152859   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:26.153771   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:26.189546   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:26.598071   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:26.652885   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:26.654494   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:26.689487   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:27.097880   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:27.153214   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:27.153863   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:27.189129   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:27.597247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:27.652083   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:27.654300   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:27.690693   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:28.098615   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:28.152275   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:28.152637   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:28.188948   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:28.598512   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:28.654283   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:28.654380   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:28.689455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:29.097724   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:29.152620   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:29.153738   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:29.189198   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:29.597687   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:29.653718   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:29.654359   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:29.689224   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:30.097865   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:30.153281   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:30.154098   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:30.189507   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:30.597484   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:30.653660   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:30.654511   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:30.689582   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:31.097711   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:31.152718   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:31.154133   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:31.189990   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:31.597519   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:31.653204   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:31.654323   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:31.697436   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:32.098215   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:32.155025   17389 kapi.go:107] duration metric: took 46.505581339s to wait for kubernetes.io/minikube-addons=registry ...
	I0813 23:49:32.155788   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:32.192983   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:32.600434   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:32.652789   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:32.688420   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:33.098237   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:33.198631   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:33.199073   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:33.599560   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:33.654097   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:33.689242   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:34.097163   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:34.153237   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:34.190006   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:34.598928   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:34.652448   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:34.689665   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:35.098428   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:35.153212   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:35.188915   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:35.598165   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:35.652174   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:35.689560   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:36.098308   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:36.153445   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:36.189675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:36.598442   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:36.653079   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:36.689580   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:37.281636   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:37.285390   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:37.285552   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:37.597177   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:37.652193   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:37.689282   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:38.097616   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:38.152691   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:38.193651   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:38.598250   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:38.652483   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:38.689808   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:39.097860   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:39.152447   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:39.189490   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:39.597650   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:39.652429   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:39.689534   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:40.097643   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:40.197969   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:40.198740   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:40.596958   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:40.652981   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:40.688874   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.098375   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:41.156100   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:41.191405   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.845801   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.856947   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:41.857426   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.099214   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:42.151804   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.188826   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:42.597876   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:42.653391   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.689868   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:43.098128   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:43.151804   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:43.188947   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:43.601946   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:43.653712   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:43.689455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:44.097578   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:44.152547   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:44.189645   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:44.598160   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:44.652295   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:44.688690   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.098686   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:45.199166   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.199410   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:45.597279   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:45.698602   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.698915   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.099244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:46.156098   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.189992   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:46.598717   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:46.652248   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.690114   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:47.097932   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:47.197384   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:47.198506   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:47.600598   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:47.652849   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:47.688458   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.097890   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:48.152506   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:48.190132   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.845674   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.846388   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:48.846655   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.098289   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:49.198332   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:49.198592   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.596768   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:49.652304   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.689468   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:50.106563   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:50.205831   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:50.206391   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:50.597976   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:50.652730   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:50.689555   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:51.097816   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:51.152602   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:51.189886   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:51.598378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:51.653210   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:51.689835   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:52.100217   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:52.153278   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:52.198828   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:52.598724   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:52.653955   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:52.688945   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:53.101206   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:53.152200   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:53.191806   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:53.599210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:53.652020   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:53.689308   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:54.512519   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:54.513123   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:54.513136   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:54.597360   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:54.652101   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:54.689111   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:55.097634   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:55.197597   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:55.198434   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:55.597107   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:55.651805   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:55.689537   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.098210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:56.153621   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:56.189318   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.983628   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.983878   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:56.985650   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.098695   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.156082   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:57.255160   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:57.600639   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.698982   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:57.699919   17389 kapi.go:107] duration metric: took 1m12.051665488s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 23:49:58.097288   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:58.190700   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:58.601685   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:58.700568   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:59.097875   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:59.190313   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:59.597703   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:59.689543   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:00.097747   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:00.190247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:00.597439   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:00.692788   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:01.098061   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:01.189499   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:01.597920   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:01.692116   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:02.099675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:02.189632   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:02.597817   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:02.694771   17389 kapi.go:107] duration metric: took 1m14.508630277s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0813 23:50:02.695953   17389 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-937866 cluster.
	I0813 23:50:02.696927   17389 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0813 23:50:02.697846   17389 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0813 23:50:03.098257   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:03.599315   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:04.098179   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:04.598577   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:05.099833   17389 kapi.go:107] duration metric: took 1m18.506732231s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 23:50:05.101490   17389 out.go:177] * Enabled addons: inspektor-gadget, ingress-dns, helm-tiller, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0813 23:50:05.102657   17389 addons.go:510] duration metric: took 1m27.864313653s for enable addons: enabled=[inspektor-gadget ingress-dns helm-tiller storage-provisioner nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0813 23:50:05.102688   17389 start.go:246] waiting for cluster config update ...
	I0813 23:50:05.102703   17389 start.go:255] writing updated cluster config ...
	I0813 23:50:05.102934   17389 ssh_runner.go:195] Run: rm -f paused
	I0813 23:50:05.153742   17389 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0813 23:50:05.155757   17389 out.go:177] * Done! kubectl is now configured to use "addons-937866" cluster and "default" namespace by default
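	(Context for the log above: the repeated kapi.go:96 "waiting for pod" lines are minikube polling each addon's pods by label selector, for example kubernetes.io/minikube-addons=csi-hostpath-driver, until they leave the Pending phase, and the kapi.go:107 lines record how long each selector took. The sketch below is a minimal client-go illustration of that idea, not minikube's actual implementation; the kube-system namespace, the default kubeconfig path, and the fixed 500ms poll interval are assumptions made only for this example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Label selector taken from the log above; namespace is an assumption.
		selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			// Done once at least one matching pod exists and all of them are Running.
			if len(pods.Items) > 0 && running == len(pods.Items) {
				fmt.Println("all pods Running for", selector)
				return
			}
			fmt.Println("waiting for pod", selector)
			time.Sleep(500 * time.Millisecond)
		}
	}

	(The equivalent manual check is listing pods with the same selector, e.g. kubectl get pods -A -l kubernetes.io/minikube-addons=gcp-auth. As the gcp-auth output above notes, a pod that should not have GCP credentials mounted can carry a label with the gcp-auth-skip-secret key in its configuration.)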
	
	
	==> CRI-O <==
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.724857902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593211724829748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3822ca2d-e539-454d-a11f-c10a26596286 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.725298831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d4ea10b-f5cc-4907-96c7-f1fecd75cfdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.725379698Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d4ea10b-f5cc-4907-96c7-f1fecd75cfdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.725767634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3eb8cb74e5c8bbe869443c3385cd0557a2f86a513126a4e495fd05265544f7,PodSandboxId:57ea8e916cbfcde2614dfb9a6dab892c9cbe713392f1b19f7fdc781c11cd6a05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592980005498668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5mdxt,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 303d6323-b5d9-46c0-8649-417ca113606c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a487b9c1b97f5e12ab6cebc84fcf33cf5387927a365df87e94622183e1891b,PodSandboxId:a3a85c77ee40de21d3fcbd68b2c4585770ffc266a4d7e07f2db9a68bf6d514b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592979879124201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lmfpb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae537c5c-d655-414f-9a61-97356e2198da,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d4ea10b-f5cc-4907-96c7-f1fecd75cfdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.760309449Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee6d7b2d-5f34-490a-ae83-5255c4f68a25 name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.760381489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee6d7b2d-5f34-490a-ae83-5255c4f68a25 name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.766869653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e79bd6f5-ab88-4e28-8c77-a2fd4a7535f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.768079933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593211768050426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e79bd6f5-ab88-4e28-8c77-a2fd4a7535f3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.768719643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f64d1db-5e46-4b61-b257-4c146ec34242 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.768776489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f64d1db-5e46-4b61-b257-4c146ec34242 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.769099310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3eb8cb74e5c8bbe869443c3385cd0557a2f86a513126a4e495fd05265544f7,PodSandboxId:57ea8e916cbfcde2614dfb9a6dab892c9cbe713392f1b19f7fdc781c11cd6a05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592980005498668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5mdxt,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 303d6323-b5d9-46c0-8649-417ca113606c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a487b9c1b97f5e12ab6cebc84fcf33cf5387927a365df87e94622183e1891b,PodSandboxId:a3a85c77ee40de21d3fcbd68b2c4585770ffc266a4d7e07f2db9a68bf6d514b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592979879124201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lmfpb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae537c5c-d655-414f-9a61-97356e2198da,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f64d1db-5e46-4b61-b257-4c146ec34242 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.806252938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=226782dd-e16b-41e9-9df6-ef727cd57371 name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.806326864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=226782dd-e16b-41e9-9df6-ef727cd57371 name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.807537412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ad779af-9d28-4686-bd68-faabd9869d4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.809022891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593211808991411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ad779af-9d28-4686-bd68-faabd9869d4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.809512877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9d09027-fff7-47a5-8db3-23eef61c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.809585161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9d09027-fff7-47a5-8db3-23eef61c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.809976585Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3eb8cb74e5c8bbe869443c3385cd0557a2f86a513126a4e495fd05265544f7,PodSandboxId:57ea8e916cbfcde2614dfb9a6dab892c9cbe713392f1b19f7fdc781c11cd6a05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592980005498668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5mdxt,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 303d6323-b5d9-46c0-8649-417ca113606c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a487b9c1b97f5e12ab6cebc84fcf33cf5387927a365df87e94622183e1891b,PodSandboxId:a3a85c77ee40de21d3fcbd68b2c4585770ffc266a4d7e07f2db9a68bf6d514b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592979879124201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lmfpb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae537c5c-d655-414f-9a61-97356e2198da,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9d09027-fff7-47a5-8db3-23eef61c5100 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.841427340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c82c304-d8b5-44c7-8b31-06cd9b327a6e name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.841515432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c82c304-d8b5-44c7-8b31-06cd9b327a6e name=/runtime.v1.RuntimeService/Version
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.842732220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d1a2f35-689e-4c1c-8e80-a7908cf1ffc6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.843934209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593211843899507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d1a2f35-689e-4c1c-8e80-a7908cf1ffc6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.844429437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26779f4f-5955-48f9-813b-855578ab0838 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.844538589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26779f4f-5955-48f9-813b-855578ab0838 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:53:31 addons-937866 crio[672]: time="2024-08-13 23:53:31.845122889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3eb8cb74e5c8bbe869443c3385cd0557a2f86a513126a4e495fd05265544f7,PodSandboxId:57ea8e916cbfcde2614dfb9a6dab892c9cbe713392f1b19f7fdc781c11cd6a05,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592980005498668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5mdxt,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 303d6323-b5d9-46c0-8649-417ca113606c,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a487b9c1b97f5e12ab6cebc84fcf33cf5387927a365df87e94622183e1891b,PodSandboxId:a3a85c77ee40de21d3fcbd68b2c4585770ffc266a4d7e07f2db9a68bf6d514b1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723592979879124201,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lmfpb,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae537c5c-d655-414f-9a61-97356e2198da,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes
.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{
io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2f
bda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTA
INER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f9
33eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9
d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26779f4f-5955-48f9-813b-855578ab0838 name=/runtime.v1.RuntimeService/ListContainers
	
	
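	The Version, ImageFsInfo and ListContainers requests above are routine CRI polling (typically from the kubelet) rather than anything specific to the failing Ingress test; CRI-O logs each request and its full response at debug level, which is why the same container list is repeated for every poll. As a sketch, assuming the addons-937866 node is still up and crictl is present on it (both are normally true for a minikube KVM node), roughly the same data can be pulled by hand over the CRI socket:
	
	    minikube -p addons-937866 ssh -- sudo crictl version
	    minikube -p addons-937866 ssh -- sudo crictl imagefsinfo
	    minikube -p addons-937866 ssh -- sudo crictl ps -a
	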
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f8a78c51396e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   35b1ff51c158f       hello-world-app-55bf9c44b4-tgpcr
	4b8516b3c92e0       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   eac73cadfda84       nginx
	080bae7736a72       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   b5b91661d0a04       busybox
	1b3eb8cb74e5c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   57ea8e916cbfc       ingress-nginx-admission-patch-5mdxt
	c7a487b9c1b97       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   a3a85c77ee40d       ingress-nginx-admission-create-lmfpb
	aef71853ff093       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   5a356b85f2d87       local-path-provisioner-86d989889c-wpqrr
	2384f23458463       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   a02180723e74c       metrics-server-8988944d9-mnlqq
	27c34da461ec9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   250db49cedef2       storage-provisioner
	2ad9649d499b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   40484d729855f       coredns-6f6b679f8f-mq64k
	9e9d428f9086a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             4 minutes ago       Running             kube-proxy                0                   d36dd11f973fb       kube-proxy-824wz
	b454b279a20a2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   cea3c9b85f1fd       kube-scheduler-addons-937866
	293d856a8715f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   9d7dd29c62160       kube-controller-manager-addons-937866
	8d4207ddccb37       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   59e77b940b475       etcd-addons-937866
	5a3ce8195d181       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   4cc8cb6149f16       kube-apiserver-addons-937866
	
	
	==> coredns [2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e] <==
	[INFO] 10.244.0.6:43651 - 28708 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017181s
	[INFO] 10.244.0.6:44940 - 168 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070971s
	[INFO] 10.244.0.6:44940 - 51370 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000138763s
	[INFO] 10.244.0.6:46219 - 19579 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060234s
	[INFO] 10.244.0.6:46219 - 3941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080171s
	[INFO] 10.244.0.6:50817 - 6558 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076919s
	[INFO] 10.244.0.6:50817 - 60831 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072477s
	[INFO] 10.244.0.6:45604 - 61411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00016819s
	[INFO] 10.244.0.6:45604 - 27361 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080329s
	[INFO] 10.244.0.6:47939 - 17859 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100466s
	[INFO] 10.244.0.6:47939 - 42188 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067174s
	[INFO] 10.244.0.6:33737 - 30997 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093616s
	[INFO] 10.244.0.6:33737 - 4630 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053071s
	[INFO] 10.244.0.6:44723 - 32836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106843s
	[INFO] 10.244.0.6:44723 - 55622 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066281s
	[INFO] 10.244.0.22:55010 - 57326 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000458656s
	[INFO] 10.244.0.22:35695 - 36191 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000551967s
	[INFO] 10.244.0.22:52582 - 12013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120693s
	[INFO] 10.244.0.22:50456 - 22256 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124883s
	[INFO] 10.244.0.22:37911 - 46857 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087551s
	[INFO] 10.244.0.22:48471 - 41145 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009798s
	[INFO] 10.244.0.22:59036 - 40120 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.0007384s
	[INFO] 10.244.0.22:47390 - 5688 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002527361s
	[INFO] 10.244.0.26:52827 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000459791s
	[INFO] 10.244.0.26:37226 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000230064s
	
	
	==> describe nodes <==
	Name:               addons-937866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-937866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=addons-937866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_13T23_48_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-937866
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Aug 2024 23:48:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-937866
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Aug 2024 23:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Aug 2024 23:51:37 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Aug 2024 23:51:37 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Aug 2024 23:51:37 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Aug 2024 23:51:37 +0000   Tue, 13 Aug 2024 23:48:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    addons-937866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc5bcf134b3b4c709916fbf63733e2a0
	  System UUID:                bc5bcf13-4b3b-4c70-9916-fbf63733e2a0
	  Boot ID:                    eaf6e0ab-a5e4-44e2-800d-ca41f7b49a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     hello-world-app-55bf9c44b4-tgpcr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-6f6b679f8f-mq64k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m55s
	  kube-system                 etcd-addons-937866                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m
	  kube-system                 kube-apiserver-addons-937866               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-addons-937866      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-824wz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-addons-937866               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 metrics-server-8988944d9-mnlqq             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m50s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-86d989889c-wpqrr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node addons-937866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node addons-937866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node addons-937866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m                   kubelet          Node addons-937866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m                   kubelet          Node addons-937866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m                   kubelet          Node addons-937866 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m59s                kubelet          Node addons-937866 status is now: NodeReady
	  Normal  RegisteredNode           4m56s                node-controller  Node addons-937866 event: Registered Node addons-937866 in Controller
	
	
	==> dmesg <==
	[  +5.061112] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.028728] kauditd_printk_skb: 156 callbacks suppressed
	[  +6.733719] kauditd_printk_skb: 36 callbacks suppressed
	[Aug13 23:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.166878] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.256576] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.794045] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.471540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.867276] kauditd_printk_skb: 22 callbacks suppressed
	[Aug13 23:50] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.474459] kauditd_printk_skb: 52 callbacks suppressed
	[ +10.894825] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.885782] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.074865] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.085248] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.014442] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.846385] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.899849] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.273445] kauditd_printk_skb: 27 callbacks suppressed
	[Aug13 23:51] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.155711] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.401696] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.114049] kauditd_printk_skb: 33 callbacks suppressed
	[Aug13 23:53] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.456489] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae] <==
	{"level":"warn","ts":"2024-08-13T23:49:56.961126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.584809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-13T23:49:56.961144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.424532ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-13T23:49:56.961164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.252916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:49:56.962532Z","caller":"traceutil/trace.go:171","msg":"trace[823464642] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"381.823044ms","start":"2024-08-13T23:49:56.580698Z","end":"2024-08-13T23:49:56.962521Z","steps":["trace[823464642] 'agreement among raft nodes before linearized reading'  (duration: 379.370567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:49:56.963432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.580663Z","time spent":"382.761174ms","remote":"127.0.0.1:33668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-13T23:49:56.963185Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.593125Z","time spent":"370.044007ms","remote":"127.0.0.1:33518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":4,"response size":30,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "}
	{"level":"info","ts":"2024-08-13T23:49:56.963291Z","caller":"traceutil/trace.go:171","msg":"trace[397263197] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"289.195793ms","start":"2024-08-13T23:49:56.674088Z","end":"2024-08-13T23:49:56.963284Z","steps":["trace[397263197] 'agreement among raft nodes before linearized reading'  (duration: 287.005682ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963334Z","caller":"traceutil/trace.go:171","msg":"trace[1053156069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1146; }","duration":"294.789693ms","start":"2024-08-13T23:49:56.668538Z","end":"2024-08-13T23:49:56.963328Z","steps":["trace[1053156069] 'agreement among raft nodes before linearized reading'  (duration: 292.578679ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963372Z","caller":"traceutil/trace.go:171","msg":"trace[516487354] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1146; }","duration":"324.65087ms","start":"2024-08-13T23:49:56.638715Z","end":"2024-08-13T23:49:56.963366Z","steps":["trace[516487354] 'agreement among raft nodes before linearized reading'  (duration: 322.419972ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963407Z","caller":"traceutil/trace.go:171","msg":"trace[1670613478] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"327.496015ms","start":"2024-08-13T23:49:56.635907Z","end":"2024-08-13T23:49:56.963403Z","steps":["trace[1670613478] 'agreement among raft nodes before linearized reading'  (duration: 325.246052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:49:56.965206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.635871Z","time spent":"329.324615ms","remote":"127.0.0.1:33668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-13T23:50:45.858362Z","caller":"traceutil/trace.go:171","msg":"trace[1305374889] linearizableReadLoop","detail":"{readStateIndex:1561; appliedIndex:1560; }","duration":"184.914883ms","start":"2024-08-13T23:50:45.673423Z","end":"2024-08-13T23:50:45.858337Z","steps":["trace[1305374889] 'read index received'  (duration: 184.770584ms)","trace[1305374889] 'applied index is now lower than readState.Index'  (duration: 143.863µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-13T23:50:45.858577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.140781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:50:45.858643Z","caller":"traceutil/trace.go:171","msg":"trace[1850772669] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1514; }","duration":"185.23461ms","start":"2024-08-13T23:50:45.673401Z","end":"2024-08-13T23:50:45.858636Z","steps":["trace[1850772669] 'agreement among raft nodes before linearized reading'  (duration: 185.075771ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:50:45.858843Z","caller":"traceutil/trace.go:171","msg":"trace[1638936505] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"216.098948ms","start":"2024-08-13T23:50:45.642697Z","end":"2024-08-13T23:50:45.858796Z","steps":["trace[1638936505] 'process raft request'  (duration: 215.540086ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:50:53.584216Z","caller":"traceutil/trace.go:171","msg":"trace[1489961074] linearizableReadLoop","detail":"{readStateIndex:1589; appliedIndex:1588; }","duration":"205.886912ms","start":"2024-08-13T23:50:53.378314Z","end":"2024-08-13T23:50:53.584201Z","steps":["trace[1489961074] 'read index received'  (duration: 205.471506ms)","trace[1489961074] 'applied index is now lower than readState.Index'  (duration: 414.395µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-13T23:50:53.584538Z","caller":"traceutil/trace.go:171","msg":"trace[1483766349] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"284.731748ms","start":"2024-08-13T23:50:53.299785Z","end":"2024-08-13T23:50:53.584517Z","steps":["trace[1483766349] 'process raft request'  (duration: 284.097909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:50:53.584486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.140799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-13T23:50:53.585802Z","caller":"traceutil/trace.go:171","msg":"trace[1039593450] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1540; }","duration":"207.480901ms","start":"2024-08-13T23:50:53.378311Z","end":"2024-08-13T23:50:53.585792Z","steps":["trace[1039593450] 'agreement among raft nodes before linearized reading'  (duration: 206.001828ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:51:15.615891Z","caller":"traceutil/trace.go:171","msg":"trace[2043624885] linearizableReadLoop","detail":"{readStateIndex:1740; appliedIndex:1739; }","duration":"133.627863ms","start":"2024-08-13T23:51:15.482249Z","end":"2024-08-13T23:51:15.615877Z","steps":["trace[2043624885] 'read index received'  (duration: 133.477937ms)","trace[2043624885] 'applied index is now lower than readState.Index'  (duration: 149.543µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-13T23:51:15.616013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.726342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:51:15.616037Z","caller":"traceutil/trace.go:171","msg":"trace[664777303] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1683; }","duration":"133.784057ms","start":"2024-08-13T23:51:15.482246Z","end":"2024-08-13T23:51:15.616030Z","steps":["trace[664777303] 'agreement among raft nodes before linearized reading'  (duration: 133.692896ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:51:15.616239Z","caller":"traceutil/trace.go:171","msg":"trace[1967623336] transaction","detail":"{read_only:false; response_revision:1683; number_of_response:1; }","duration":"304.299722ms","start":"2024-08-13T23:51:15.311890Z","end":"2024-08-13T23:51:15.616190Z","steps":["trace[1967623336] 'process raft request'  (duration: 303.899084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:51:15.616339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:51:15.311868Z","time spent":"304.411735ms","remote":"127.0.0.1:33748","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1671 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-13T23:51:51.518256Z","caller":"traceutil/trace.go:171","msg":"trace[2092690914] transaction","detail":"{read_only:false; response_revision:1888; number_of_response:1; }","duration":"158.236909ms","start":"2024-08-13T23:51:51.360004Z","end":"2024-08-13T23:51:51.518241Z","steps":["trace[2092690914] 'process raft request'  (duration: 158.104001ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:53:32 up 5 min,  0 users,  load average: 0.42, 1.01, 0.54
	Linux addons-937866 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0813 23:50:13.354883       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0813 23:50:13.367174       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0813 23:50:16.644727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.8:8443->192.168.39.1:38974: use of closed network connection
	E0813 23:50:16.844139       1 conn.go:339] Error on socket receive: read tcp 192.168.39.8:8443->192.168.39.1:38990: use of closed network connection
	I0813 23:50:41.798659       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.168.48"}
	I0813 23:50:59.839667       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0813 23:50:59.997213       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.9.84"}
	I0813 23:51:01.625081       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0813 23:51:02.673121       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0813 23:51:24.549983       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0813 23:51:40.742065       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0813 23:51:47.259048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.259122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.296997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.297093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.311362       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.311412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.337527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.337688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0813 23:51:48.297523       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0813 23:51:48.337775       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0813 23:51:48.348326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0813 23:53:21.511584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.13.200"}
	
	
	==> kube-controller-manager [293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340] <==
	W0813 23:52:19.078741       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:19.078790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:52:22.350869       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:22.350930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:52:23.389804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:23.389916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:52:24.978658       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:24.978769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:52:57.401498       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:57.401659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:52:59.470654       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:52:59.470782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:53:00.437166       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:53:00.437224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:53:14.353294       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:53:14.353347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0813 23:53:21.283780       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.942644ms"
	I0813 23:53:21.303844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.70058ms"
	I0813 23:53:21.328391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="24.45127ms"
	I0813 23:53:21.328525       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.716µs"
	I0813 23:53:23.949794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="9.253µs"
	I0813 23:53:23.950239       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0813 23:53:23.960952       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0813 23:53:24.356899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.74332ms"
	I0813 23:53:24.356977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.238µs"
	
	
	==> kube-proxy [9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0813 23:48:39.544557       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0813 23:48:39.566423       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.8"]
	E0813 23:48:39.566512       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0813 23:48:39.663392       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0813 23:48:39.663434       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0813 23:48:39.663465       1 server_linux.go:169] "Using iptables Proxier"
	I0813 23:48:39.665717       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0813 23:48:39.665964       1 server.go:483] "Version info" version="v1.31.0"
	I0813 23:48:39.665984       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:48:39.670933       1 config.go:197] "Starting service config controller"
	I0813 23:48:39.670969       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0813 23:48:39.670999       1 config.go:104] "Starting endpoint slice config controller"
	I0813 23:48:39.671004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0813 23:48:39.679225       1 config.go:326] "Starting node config controller"
	I0813 23:48:39.679253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0813 23:48:39.771960       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0813 23:48:39.772043       1 shared_informer.go:320] Caches are synced for service config
	I0813 23:48:39.779530       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88] <==
	W0813 23:48:30.034785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 23:48:30.034811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:30.038916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 23:48:30.038977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:30.896847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 23:48:30.896916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.039478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 23:48:31.039512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.084928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 23:48:31.085106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.090241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 23:48:31.090282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.225454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 23:48:31.225511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.229342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 23:48:31.229492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.242459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 23:48:31.242637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.256334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 23:48:31.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.258547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 23:48:31.258626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.308579       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 23:48:31.308758       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0813 23:48:33.308676       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 13 23:53:21 addons-937866 kubelet[1207]: I0813 23:53:21.289305    1207 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d9d31f-6635-4275-9b5e-4bfa444ec3da" containerName="hostpath"
	Aug 13 23:53:21 addons-937866 kubelet[1207]: I0813 23:53:21.403838    1207 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdv6\" (UniqueName: \"kubernetes.io/projected/8a9febae-6afb-415b-9902-a227a7298d06-kube-api-access-bbdv6\") pod \"hello-world-app-55bf9c44b4-tgpcr\" (UID: \"8a9febae-6afb-415b-9902-a227a7298d06\") " pod="default/hello-world-app-55bf9c44b4-tgpcr"
	Aug 13 23:53:22 addons-937866 kubelet[1207]: I0813 23:53:22.518653    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p44zx\" (UniqueName: \"kubernetes.io/projected/1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf-kube-api-access-p44zx\") pod \"1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf\" (UID: \"1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf\") "
	Aug 13 23:53:22 addons-937866 kubelet[1207]: I0813 23:53:22.520641    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf-kube-api-access-p44zx" (OuterVolumeSpecName: "kube-api-access-p44zx") pod "1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf" (UID: "1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf"). InnerVolumeSpecName "kube-api-access-p44zx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 23:53:22 addons-937866 kubelet[1207]: I0813 23:53:22.619422    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p44zx\" (UniqueName: \"kubernetes.io/projected/1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf-kube-api-access-p44zx\") on node \"addons-937866\" DevicePath \"\""
	Aug 13 23:53:23 addons-937866 kubelet[1207]: E0813 23:53:23.031905    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593203031423088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581817,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:53:23 addons-937866 kubelet[1207]: E0813 23:53:23.031943    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593203031423088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581817,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:53:23 addons-937866 kubelet[1207]: I0813 23:53:23.325441    1207 scope.go:117] "RemoveContainer" containerID="0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e"
	Aug 13 23:53:23 addons-937866 kubelet[1207]: I0813 23:53:23.344401    1207 scope.go:117] "RemoveContainer" containerID="0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e"
	Aug 13 23:53:23 addons-937866 kubelet[1207]: E0813 23:53:23.345305    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e\": container with ID starting with 0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e not found: ID does not exist" containerID="0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e"
	Aug 13 23:53:23 addons-937866 kubelet[1207]: I0813 23:53:23.345383    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e"} err="failed to get container status \"0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e\": rpc error: code = NotFound desc = could not find container \"0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e\": container with ID starting with 0d8d30b2ebbf4d52322cdb198072019ab27bc11777c410e5b1d34203669e5a3e not found: ID does not exist"
	Aug 13 23:53:24 addons-937866 kubelet[1207]: I0813 23:53:24.759478    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf" path="/var/lib/kubelet/pods/1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf/volumes"
	Aug 13 23:53:24 addons-937866 kubelet[1207]: I0813 23:53:24.759942    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303d6323-b5d9-46c0-8649-417ca113606c" path="/var/lib/kubelet/pods/303d6323-b5d9-46c0-8649-417ca113606c/volumes"
	Aug 13 23:53:24 addons-937866 kubelet[1207]: I0813 23:53:24.763089    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae537c5c-d655-414f-9a61-97356e2198da" path="/var/lib/kubelet/pods/ae537c5c-d655-414f-9a61-97356e2198da/volumes"
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.253767    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-webhook-cert\") pod \"4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1\" (UID: \"4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1\") "
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.253847    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-244f2\" (UniqueName: \"kubernetes.io/projected/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-kube-api-access-244f2\") pod \"4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1\" (UID: \"4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1\") "
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.255739    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1" (UID: "4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.256427    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-kube-api-access-244f2" (OuterVolumeSpecName: "kube-api-access-244f2") pod "4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1" (UID: "4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1"). InnerVolumeSpecName "kube-api-access-244f2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.345549    1207 scope.go:117] "RemoveContainer" containerID="4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab"
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.354098    1207 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-244f2\" (UniqueName: \"kubernetes.io/projected/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-kube-api-access-244f2\") on node \"addons-937866\" DevicePath \"\""
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.354132    1207 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1-webhook-cert\") on node \"addons-937866\" DevicePath \"\""
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.360091    1207 scope.go:117] "RemoveContainer" containerID="4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab"
	Aug 13 23:53:27 addons-937866 kubelet[1207]: E0813 23:53:27.360476    1207 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab\": container with ID starting with 4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab not found: ID does not exist" containerID="4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab"
	Aug 13 23:53:27 addons-937866 kubelet[1207]: I0813 23:53:27.360523    1207 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab"} err="failed to get container status \"4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab\": rpc error: code = NotFound desc = could not find container \"4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab\": container with ID starting with 4423399ce8d96587343285c9a085f4cb38a6191778d3c964fb2c88ec9f894aab not found: ID does not exist"
	Aug 13 23:53:28 addons-937866 kubelet[1207]: I0813 23:53:28.758343    1207 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1" path="/var/lib/kubelet/pods/4b8d9b1d-5eb8-4a34-9ce9-9721cd9fa0f1/volumes"
	
	
	==> storage-provisioner [27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b] <==
	I0813 23:48:44.466997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 23:48:44.595691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 23:48:44.596286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 23:48:44.652286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 23:48:44.652811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89976f48-0746-4e54-ba26-c348dc5cce52", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef became leader
	I0813 23:48:44.652948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef!
	I0813 23:48:44.856336       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-937866 -n addons-937866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-937866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.26s)
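For anyone triaging this failure against a live cluster, a minimal sketch of follow-up checks, assuming the addons-937866 profile from this run is still up; the profile, context, namespace, and deployment names are taken from the logs above, and the commands are ordinary kubectl/minikube invocations rather than part of the recorded test output:

	kubectl --context addons-937866 -n ingress-nginx get pods -o wide       # controller pod state
	kubectl --context addons-937866 get ingress -A                          # Ingress objects and assigned addresses
	kubectl --context addons-937866 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	out/minikube-linux-amd64 -p addons-937866 ip                            # node IP serving the ingress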

TestAddons/parallel/MetricsServer (346.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.340171ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-mnlqq" [82850aaa-4f93-49e5-b89b-e86bc208fd74] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004174963s
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (66.645228ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m10.695392688s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (60.338393ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m14.207996307s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (64.416253ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m19.527279354s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (76.712372ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m24.633253057s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (64.989969ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m34.828243449s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (65.227704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 2m54.356564252s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (59.413465ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 3m12.062011812s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (60.561599ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 3m54.332684187s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (64.91727ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 4m59.874913327s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (62.403393ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 5m55.19998061s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (59.261538ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 6m28.913659915s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (63.489232ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 7m13.683902532s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-937866 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-937866 top pods -n kube-system: exit status 1 (60.528679ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mq64k, age: 7m48.806050594s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
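The failure above is the test giving up after repeatedly running `kubectl top pods -n kube-system` and never getting metrics back from metrics-server. A minimal sketch of that polling pattern, assuming kubectl is on PATH and using the profile name from this run (the helper name pollTopPods and the fixed 10s interval are made up here, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollTopPods keeps running `kubectl top pods` for the given context and
// namespace until it exits 0 or the timeout elapses, echoing the retry
// behaviour visible in the log above.
func pollTopPods(kubeContext, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"top", "pods", "-n", namespace).CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("metrics never became available: %v\n%s", err, out)
		}
		time.Sleep(10 * time.Second) // the test grows its interval; a fixed one is enough here
	}
}

func main() {
	if err := pollTopPods("addons-937866", "kube-system", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}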
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-937866 -n addons-937866
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 logs -n 25: (1.188098368s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-307809                                                                     | download-only-307809 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-857485 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | binary-mirror-857485                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46401                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-857485                                                                     | binary-mirror-857485 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| addons  | enable dashboard -p                                                                         | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-937866 --wait=true                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-937866 ssh cat                                                                       | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | /opt/local-path-provisioner/pvc-a7fb6e01-e9d6-4ee0-9569-672424823465_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-937866 ip                                                                            | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | -p addons-937866                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | -p addons-937866                                                                            |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:50 UTC | 13 Aug 24 23:50 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | addons-937866                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-937866 ssh curl -s                                                                   | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-937866 addons                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-937866 addons                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:51 UTC | 13 Aug 24 23:51 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-937866 ip                                                                            | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-937866 addons disable                                                                | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:53 UTC | 13 Aug 24 23:53 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-937866 addons                                                                        | addons-937866        | jenkins | v1.33.1 | 13 Aug 24 23:56 UTC | 13 Aug 24 23:56 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 23:47:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 23:47:56.015304   17389 out.go:291] Setting OutFile to fd 1 ...
	I0813 23:47:56.015406   17389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:56.015415   17389 out.go:304] Setting ErrFile to fd 2...
	I0813 23:47:56.015419   17389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:56.015581   17389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0813 23:47:56.016105   17389 out.go:298] Setting JSON to false
	I0813 23:47:56.016907   17389 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1822,"bootTime":1723591054,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0813 23:47:56.016958   17389 start.go:139] virtualization: kvm guest
	I0813 23:47:56.018901   17389 out.go:177] * [addons-937866] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0813 23:47:56.019990   17389 notify.go:220] Checking for updates...
	I0813 23:47:56.020005   17389 out.go:177]   - MINIKUBE_LOCATION=19429
	I0813 23:47:56.021169   17389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 23:47:56.022232   17389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:47:56.023457   17389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:56.024684   17389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0813 23:47:56.025798   17389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0813 23:47:56.027084   17389 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 23:47:56.057890   17389 out.go:177] * Using the kvm2 driver based on user configuration
	I0813 23:47:56.059131   17389 start.go:297] selected driver: kvm2
	I0813 23:47:56.059143   17389 start.go:901] validating driver "kvm2" against <nil>
	I0813 23:47:56.059153   17389 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0813 23:47:56.059796   17389 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:56.059851   17389 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 23:47:56.074106   17389 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0813 23:47:56.074157   17389 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 23:47:56.074366   17389 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 23:47:56.074397   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:47:56.074404   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:47:56.074411   17389 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 23:47:56.074463   17389 start.go:340] cluster config:
	{Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:47:56.074542   17389 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:56.076063   17389 out.go:177] * Starting "addons-937866" primary control-plane node in "addons-937866" cluster
	I0813 23:47:56.077069   17389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:47:56.077097   17389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0813 23:47:56.077105   17389 cache.go:56] Caching tarball of preloaded images
	I0813 23:47:56.077157   17389 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0813 23:47:56.077167   17389 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0813 23:47:56.077449   17389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json ...
	I0813 23:47:56.077466   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json: {Name:mk8a28a8ad54dcd755c2ce1cbf17fe2ba8c5cf3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:47:56.077572   17389 start.go:360] acquireMachinesLock for addons-937866: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0813 23:47:56.077620   17389 start.go:364] duration metric: took 35.654µs to acquireMachinesLock for "addons-937866"
	I0813 23:47:56.077636   17389 start.go:93] Provisioning new machine with config: &{Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0813 23:47:56.077693   17389 start.go:125] createHost starting for "" (driver="kvm2")
	I0813 23:47:56.079130   17389 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0813 23:47:56.079247   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:47:56.079279   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:47:56.092702   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I0813 23:47:56.093045   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:47:56.093535   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:47:56.093554   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:47:56.093910   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:47:56.094090   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:47:56.094219   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:47:56.094374   17389 start.go:159] libmachine.API.Create for "addons-937866" (driver="kvm2")
	I0813 23:47:56.094407   17389 client.go:168] LocalClient.Create starting
	I0813 23:47:56.094445   17389 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0813 23:47:56.302561   17389 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0813 23:47:56.629781   17389 main.go:141] libmachine: Running pre-create checks...
	I0813 23:47:56.629806   17389 main.go:141] libmachine: (addons-937866) Calling .PreCreateCheck
	I0813 23:47:56.630298   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:47:56.630768   17389 main.go:141] libmachine: Creating machine...
	I0813 23:47:56.630784   17389 main.go:141] libmachine: (addons-937866) Calling .Create
	I0813 23:47:56.630895   17389 main.go:141] libmachine: (addons-937866) Creating KVM machine...
	I0813 23:47:56.632127   17389 main.go:141] libmachine: (addons-937866) DBG | found existing default KVM network
	I0813 23:47:56.632809   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.632684   17411 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0813 23:47:56.632825   17389 main.go:141] libmachine: (addons-937866) DBG | created network xml: 
	I0813 23:47:56.632834   17389 main.go:141] libmachine: (addons-937866) DBG | <network>
	I0813 23:47:56.632844   17389 main.go:141] libmachine: (addons-937866) DBG |   <name>mk-addons-937866</name>
	I0813 23:47:56.632855   17389 main.go:141] libmachine: (addons-937866) DBG |   <dns enable='no'/>
	I0813 23:47:56.632864   17389 main.go:141] libmachine: (addons-937866) DBG |   
	I0813 23:47:56.632874   17389 main.go:141] libmachine: (addons-937866) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0813 23:47:56.632880   17389 main.go:141] libmachine: (addons-937866) DBG |     <dhcp>
	I0813 23:47:56.632886   17389 main.go:141] libmachine: (addons-937866) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0813 23:47:56.632892   17389 main.go:141] libmachine: (addons-937866) DBG |     </dhcp>
	I0813 23:47:56.632914   17389 main.go:141] libmachine: (addons-937866) DBG |   </ip>
	I0813 23:47:56.632925   17389 main.go:141] libmachine: (addons-937866) DBG |   
	I0813 23:47:56.632932   17389 main.go:141] libmachine: (addons-937866) DBG | </network>
	I0813 23:47:56.632977   17389 main.go:141] libmachine: (addons-937866) DBG | 
	I0813 23:47:56.638257   17389 main.go:141] libmachine: (addons-937866) DBG | trying to create private KVM network mk-addons-937866 192.168.39.0/24...
	I0813 23:47:56.699410   17389 main.go:141] libmachine: (addons-937866) DBG | private KVM network mk-addons-937866 192.168.39.0/24 created
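The kvm2 driver defines the network XML printed above through the libvirt API. Purely as an illustration of the equivalent manual steps (not the driver's code path; the scratch file path below is arbitrary), the same network can be created by writing the XML to a file and calling virsh:

package main

import (
	"log"
	"os"
	"os/exec"
)

// Define and start a libvirt network equivalent to the XML in the log above.
// Requires virsh and permission to connect to qemu:///system.
func main() {
	xml := `<network>
  <name>mk-addons-937866</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	path := "/tmp/mk-addons-937866.xml" // arbitrary scratch location
	if err := os.WriteFile(path, []byte(xml), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"net-define", path},
		{"net-start", "mk-addons-937866"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v: %v", args, err)
		}
	}
}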
	I0813 23:47:56.699443   17389 main.go:141] libmachine: (addons-937866) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 ...
	I0813 23:47:56.699466   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.699356   17411 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:56.699484   17389 main.go:141] libmachine: (addons-937866) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0813 23:47:56.699505   17389 main.go:141] libmachine: (addons-937866) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0813 23:47:56.965916   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:56.965748   17411 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa...
	I0813 23:47:57.109728   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:57.109628   17411 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/addons-937866.rawdisk...
	I0813 23:47:57.109756   17389 main.go:141] libmachine: (addons-937866) DBG | Writing magic tar header
	I0813 23:47:57.109767   17389 main.go:141] libmachine: (addons-937866) DBG | Writing SSH key tar header
	I0813 23:47:57.109775   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:57.109735   17411 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 ...
	I0813 23:47:57.109888   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866
	I0813 23:47:57.109908   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866 (perms=drwx------)
	I0813 23:47:57.109916   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0813 23:47:57.109926   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:57.109936   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0813 23:47:57.109949   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0813 23:47:57.109957   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home/jenkins
	I0813 23:47:57.109969   17389 main.go:141] libmachine: (addons-937866) DBG | Checking permissions on dir: /home
	I0813 23:47:57.109978   17389 main.go:141] libmachine: (addons-937866) DBG | Skipping /home - not owner
	I0813 23:47:57.109998   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0813 23:47:57.110011   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0813 23:47:57.110018   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0813 23:47:57.110058   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0813 23:47:57.110074   17389 main.go:141] libmachine: (addons-937866) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0813 23:47:57.110083   17389 main.go:141] libmachine: (addons-937866) Creating domain...
	I0813 23:47:57.110973   17389 main.go:141] libmachine: (addons-937866) define libvirt domain using xml: 
	I0813 23:47:57.111006   17389 main.go:141] libmachine: (addons-937866) <domain type='kvm'>
	I0813 23:47:57.111016   17389 main.go:141] libmachine: (addons-937866)   <name>addons-937866</name>
	I0813 23:47:57.111028   17389 main.go:141] libmachine: (addons-937866)   <memory unit='MiB'>4000</memory>
	I0813 23:47:57.111037   17389 main.go:141] libmachine: (addons-937866)   <vcpu>2</vcpu>
	I0813 23:47:57.111046   17389 main.go:141] libmachine: (addons-937866)   <features>
	I0813 23:47:57.111054   17389 main.go:141] libmachine: (addons-937866)     <acpi/>
	I0813 23:47:57.111062   17389 main.go:141] libmachine: (addons-937866)     <apic/>
	I0813 23:47:57.111070   17389 main.go:141] libmachine: (addons-937866)     <pae/>
	I0813 23:47:57.111080   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111088   17389 main.go:141] libmachine: (addons-937866)   </features>
	I0813 23:47:57.111100   17389 main.go:141] libmachine: (addons-937866)   <cpu mode='host-passthrough'>
	I0813 23:47:57.111108   17389 main.go:141] libmachine: (addons-937866)   
	I0813 23:47:57.111116   17389 main.go:141] libmachine: (addons-937866)   </cpu>
	I0813 23:47:57.111127   17389 main.go:141] libmachine: (addons-937866)   <os>
	I0813 23:47:57.111135   17389 main.go:141] libmachine: (addons-937866)     <type>hvm</type>
	I0813 23:47:57.111147   17389 main.go:141] libmachine: (addons-937866)     <boot dev='cdrom'/>
	I0813 23:47:57.111157   17389 main.go:141] libmachine: (addons-937866)     <boot dev='hd'/>
	I0813 23:47:57.111175   17389 main.go:141] libmachine: (addons-937866)     <bootmenu enable='no'/>
	I0813 23:47:57.111197   17389 main.go:141] libmachine: (addons-937866)   </os>
	I0813 23:47:57.111207   17389 main.go:141] libmachine: (addons-937866)   <devices>
	I0813 23:47:57.111215   17389 main.go:141] libmachine: (addons-937866)     <disk type='file' device='cdrom'>
	I0813 23:47:57.111226   17389 main.go:141] libmachine: (addons-937866)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/boot2docker.iso'/>
	I0813 23:47:57.111234   17389 main.go:141] libmachine: (addons-937866)       <target dev='hdc' bus='scsi'/>
	I0813 23:47:57.111239   17389 main.go:141] libmachine: (addons-937866)       <readonly/>
	I0813 23:47:57.111246   17389 main.go:141] libmachine: (addons-937866)     </disk>
	I0813 23:47:57.111253   17389 main.go:141] libmachine: (addons-937866)     <disk type='file' device='disk'>
	I0813 23:47:57.111261   17389 main.go:141] libmachine: (addons-937866)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0813 23:47:57.111269   17389 main.go:141] libmachine: (addons-937866)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/addons-937866.rawdisk'/>
	I0813 23:47:57.111277   17389 main.go:141] libmachine: (addons-937866)       <target dev='hda' bus='virtio'/>
	I0813 23:47:57.111283   17389 main.go:141] libmachine: (addons-937866)     </disk>
	I0813 23:47:57.111290   17389 main.go:141] libmachine: (addons-937866)     <interface type='network'>
	I0813 23:47:57.111296   17389 main.go:141] libmachine: (addons-937866)       <source network='mk-addons-937866'/>
	I0813 23:47:57.111305   17389 main.go:141] libmachine: (addons-937866)       <model type='virtio'/>
	I0813 23:47:57.111326   17389 main.go:141] libmachine: (addons-937866)     </interface>
	I0813 23:47:57.111344   17389 main.go:141] libmachine: (addons-937866)     <interface type='network'>
	I0813 23:47:57.111357   17389 main.go:141] libmachine: (addons-937866)       <source network='default'/>
	I0813 23:47:57.111381   17389 main.go:141] libmachine: (addons-937866)       <model type='virtio'/>
	I0813 23:47:57.111391   17389 main.go:141] libmachine: (addons-937866)     </interface>
	I0813 23:47:57.111398   17389 main.go:141] libmachine: (addons-937866)     <serial type='pty'>
	I0813 23:47:57.111403   17389 main.go:141] libmachine: (addons-937866)       <target port='0'/>
	I0813 23:47:57.111409   17389 main.go:141] libmachine: (addons-937866)     </serial>
	I0813 23:47:57.111415   17389 main.go:141] libmachine: (addons-937866)     <console type='pty'>
	I0813 23:47:57.111426   17389 main.go:141] libmachine: (addons-937866)       <target type='serial' port='0'/>
	I0813 23:47:57.111433   17389 main.go:141] libmachine: (addons-937866)     </console>
	I0813 23:47:57.111438   17389 main.go:141] libmachine: (addons-937866)     <rng model='virtio'>
	I0813 23:47:57.111445   17389 main.go:141] libmachine: (addons-937866)       <backend model='random'>/dev/random</backend>
	I0813 23:47:57.111451   17389 main.go:141] libmachine: (addons-937866)     </rng>
	I0813 23:47:57.111456   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111462   17389 main.go:141] libmachine: (addons-937866)     
	I0813 23:47:57.111467   17389 main.go:141] libmachine: (addons-937866)   </devices>
	I0813 23:47:57.111473   17389 main.go:141] libmachine: (addons-937866) </domain>
	I0813 23:47:57.111480   17389 main.go:141] libmachine: (addons-937866) 
	I0813 23:47:57.117248   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:6e:88:2b in network default
	I0813 23:47:57.117773   17389 main.go:141] libmachine: (addons-937866) Ensuring networks are active...
	I0813 23:47:57.117794   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:57.118407   17389 main.go:141] libmachine: (addons-937866) Ensuring network default is active
	I0813 23:47:57.118729   17389 main.go:141] libmachine: (addons-937866) Ensuring network mk-addons-937866 is active
	I0813 23:47:57.119257   17389 main.go:141] libmachine: (addons-937866) Getting domain xml...
	I0813 23:47:57.119908   17389 main.go:141] libmachine: (addons-937866) Creating domain...
	I0813 23:47:58.500071   17389 main.go:141] libmachine: (addons-937866) Waiting to get IP...
	I0813 23:47:58.500884   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:58.501350   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:58.501400   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:58.501343   17411 retry.go:31] will retry after 295.085727ms: waiting for machine to come up
	I0813 23:47:58.797710   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:58.798089   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:58.798125   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:58.798019   17411 retry.go:31] will retry after 366.444505ms: waiting for machine to come up
	I0813 23:47:59.165565   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:59.165989   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:59.166017   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:59.165940   17411 retry.go:31] will retry after 420.97021ms: waiting for machine to come up
	I0813 23:47:59.589904   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:47:59.590365   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:47:59.590393   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:47:59.590315   17411 retry.go:31] will retry after 443.200792ms: waiting for machine to come up
	I0813 23:48:00.035144   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:00.035702   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:00.035741   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:00.035649   17411 retry.go:31] will retry after 681.201668ms: waiting for machine to come up
	I0813 23:48:00.718414   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:00.718796   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:00.718850   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:00.718782   17411 retry.go:31] will retry after 643.430207ms: waiting for machine to come up
	I0813 23:48:01.364137   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:01.364511   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:01.364538   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:01.364461   17411 retry.go:31] will retry after 752.692025ms: waiting for machine to come up
	I0813 23:48:02.118473   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:02.118872   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:02.118893   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:02.118846   17411 retry.go:31] will retry after 1.147620092s: waiting for machine to come up
	I0813 23:48:03.268025   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:03.268468   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:03.268496   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:03.268417   17411 retry.go:31] will retry after 1.646773744s: waiting for machine to come up
	I0813 23:48:04.916483   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:04.916812   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:04.916840   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:04.916767   17411 retry.go:31] will retry after 1.966715915s: waiting for machine to come up
	I0813 23:48:06.884641   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:06.885074   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:06.885103   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:06.885022   17411 retry.go:31] will retry after 1.868597461s: waiting for machine to come up
	I0813 23:48:08.755960   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:08.756378   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:08.756408   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:08.756332   17411 retry.go:31] will retry after 3.478823879s: waiting for machine to come up
	I0813 23:48:12.237211   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:12.237564   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find current IP address of domain addons-937866 in network mk-addons-937866
	I0813 23:48:12.237589   17389 main.go:141] libmachine: (addons-937866) DBG | I0813 23:48:12.237536   17411 retry.go:31] will retry after 4.371295963s: waiting for machine to come up
	I0813 23:48:16.610789   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.611358   17389 main.go:141] libmachine: (addons-937866) Found IP for machine: 192.168.39.8
	I0813 23:48:16.611385   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has current primary IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
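The block above is the driver polling libvirt for a DHCP lease, sleeping a little longer after each miss until 192.168.39.8 shows up. A generic sketch of that wait loop (the function name, initial delay, and growth factor here are illustrative, not minikube's own retry package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries lookup with a growing pause between attempts, in the same
// spirit as the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between probes
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.8", nil // value taken from the log above
	}, 2*time.Minute)
	fmt.Println(ip, err)
}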
	I0813 23:48:16.611398   17389 main.go:141] libmachine: (addons-937866) Reserving static IP address...
	I0813 23:48:16.611716   17389 main.go:141] libmachine: (addons-937866) DBG | unable to find host DHCP lease matching {name: "addons-937866", mac: "52:54:00:a3:c3:1c", ip: "192.168.39.8"} in network mk-addons-937866
	I0813 23:48:16.680908   17389 main.go:141] libmachine: (addons-937866) DBG | Getting to WaitForSSH function...
	I0813 23:48:16.680933   17389 main.go:141] libmachine: (addons-937866) Reserved static IP address: 192.168.39.8
	I0813 23:48:16.680945   17389 main.go:141] libmachine: (addons-937866) Waiting for SSH to be available...
	I0813 23:48:16.683392   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.683811   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.683834   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.683999   17389 main.go:141] libmachine: (addons-937866) DBG | Using SSH client type: external
	I0813 23:48:16.684024   17389 main.go:141] libmachine: (addons-937866) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa (-rw-------)
	I0813 23:48:16.684055   17389 main.go:141] libmachine: (addons-937866) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0813 23:48:16.684071   17389 main.go:141] libmachine: (addons-937866) DBG | About to run SSH command:
	I0813 23:48:16.684082   17389 main.go:141] libmachine: (addons-937866) DBG | exit 0
	I0813 23:48:16.814089   17389 main.go:141] libmachine: (addons-937866) DBG | SSH cmd err, output: <nil>: 
	I0813 23:48:16.814340   17389 main.go:141] libmachine: (addons-937866) KVM machine creation complete!
	I0813 23:48:16.814634   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:48:16.815102   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:16.815290   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:16.815462   17389 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0813 23:48:16.815475   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:16.816739   17389 main.go:141] libmachine: Detecting operating system of created instance...
	I0813 23:48:16.816752   17389 main.go:141] libmachine: Waiting for SSH to be available...
	I0813 23:48:16.816758   17389 main.go:141] libmachine: Getting to WaitForSSH function...
	I0813 23:48:16.816764   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:16.819160   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.819504   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.819531   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.819638   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:16.819812   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.819964   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.820100   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:16.820273   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:16.820440   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:16.820450   17389 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0813 23:48:16.925176   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 23:48:16.925199   17389 main.go:141] libmachine: Detecting the provisioner...
	I0813 23:48:16.925210   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:16.927699   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.928115   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:16.928137   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:16.928287   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:16.928496   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.928725   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:16.928889   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:16.929061   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:16.929250   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:16.929267   17389 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0813 23:48:17.034140   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0813 23:48:17.034237   17389 main.go:141] libmachine: found compatible host: buildroot
	I0813 23:48:17.034252   17389 main.go:141] libmachine: Provisioning with buildroot...
	I0813 23:48:17.034261   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.034546   17389 buildroot.go:166] provisioning hostname "addons-937866"
	I0813 23:48:17.034567   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.034726   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.037219   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.037509   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.037541   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.037642   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.037788   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.037926   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.038090   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.038264   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.038459   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.038476   17389 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-937866 && echo "addons-937866" | sudo tee /etc/hostname
	I0813 23:48:17.158940   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-937866
	
	I0813 23:48:17.158971   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.161357   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.161682   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.161711   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.161836   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.162029   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.162181   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.162349   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.162509   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.162674   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.162689   17389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-937866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-937866/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-937866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 23:48:17.277816   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0813 23:48:17.277845   17389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0813 23:48:17.277893   17389 buildroot.go:174] setting up certificates
	I0813 23:48:17.277907   17389 provision.go:84] configureAuth start
	I0813 23:48:17.277920   17389 main.go:141] libmachine: (addons-937866) Calling .GetMachineName
	I0813 23:48:17.278254   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:17.280752   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.281042   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.281065   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.281184   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.283453   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.283754   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.283782   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.283958   17389 provision.go:143] copyHostCerts
	I0813 23:48:17.284030   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0813 23:48:17.284177   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0813 23:48:17.284259   17389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0813 23:48:17.284325   17389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.addons-937866 san=[127.0.0.1 192.168.39.8 addons-937866 localhost minikube]
	I0813 23:48:17.410467   17389 provision.go:177] copyRemoteCerts
	I0813 23:48:17.410529   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 23:48:17.410551   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.412942   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.413289   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.413313   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.413443   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.413636   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.413754   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.413912   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:17.496011   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0813 23:48:17.518748   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0813 23:48:17.540899   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 23:48:17.562888   17389 provision.go:87] duration metric: took 284.967043ms to configureAuth
	I0813 23:48:17.562910   17389 buildroot.go:189] setting minikube options for container-runtime
	I0813 23:48:17.563093   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:17.563180   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.565610   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.565914   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.565948   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.566113   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.566301   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.566459   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.566591   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.566738   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.566894   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.566907   17389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0813 23:48:17.827868   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0813 23:48:17.827895   17389 main.go:141] libmachine: Checking connection to Docker...
	I0813 23:48:17.827904   17389 main.go:141] libmachine: (addons-937866) Calling .GetURL
	I0813 23:48:17.829121   17389 main.go:141] libmachine: (addons-937866) DBG | Using libvirt version 6000000
	I0813 23:48:17.831102   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.831408   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.831438   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.831562   17389 main.go:141] libmachine: Docker is up and running!
	I0813 23:48:17.831579   17389 main.go:141] libmachine: Reticulating splines...
	I0813 23:48:17.831588   17389 client.go:171] duration metric: took 21.737171133s to LocalClient.Create
	I0813 23:48:17.831616   17389 start.go:167] duration metric: took 21.737250787s to libmachine.API.Create "addons-937866"
	I0813 23:48:17.831640   17389 start.go:293] postStartSetup for "addons-937866" (driver="kvm2")
	I0813 23:48:17.831666   17389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 23:48:17.831689   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:17.831918   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 23:48:17.831943   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.833832   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.834180   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.834200   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.834363   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.834558   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.834881   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.835059   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:17.915779   17389 ssh_runner.go:195] Run: cat /etc/os-release
	I0813 23:48:17.919650   17389 info.go:137] Remote host: Buildroot 2023.02.9
	I0813 23:48:17.919674   17389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0813 23:48:17.919742   17389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0813 23:48:17.919773   17389 start.go:296] duration metric: took 88.113995ms for postStartSetup
	I0813 23:48:17.919810   17389 main.go:141] libmachine: (addons-937866) Calling .GetConfigRaw
	I0813 23:48:17.920410   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:17.922970   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.923286   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.923312   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.923518   17389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/config.json ...
	I0813 23:48:17.923689   17389 start.go:128] duration metric: took 21.84598673s to createHost
	I0813 23:48:17.923707   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:17.925887   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.926184   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:17.926222   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:17.926309   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:17.926490   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.926639   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:17.926749   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:17.926891   17389 main.go:141] libmachine: Using SSH client type: native
	I0813 23:48:17.927043   17389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0813 23:48:17.927054   17389 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0813 23:48:18.034343   17389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723592898.008013281
	
	I0813 23:48:18.034368   17389 fix.go:216] guest clock: 1723592898.008013281
	I0813 23:48:18.034378   17389 fix.go:229] Guest: 2024-08-13 23:48:18.008013281 +0000 UTC Remote: 2024-08-13 23:48:17.923698269 +0000 UTC m=+21.939464763 (delta=84.315012ms)
	I0813 23:48:18.034435   17389 fix.go:200] guest clock delta is within tolerance: 84.315012ms
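The delta above is simply the guest timestamp minus the host timestamp: 23:48:18.008013281 − 23:48:17.923698269 = 0.084315012 s, i.e. the 84.315012ms reported, and because it falls within minikube's tolerance the guest clock is left alone rather than being resynced.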
	I0813 23:48:18.034443   17389 start.go:83] releasing machines lock for "addons-937866", held for 21.956814087s
	I0813 23:48:18.034465   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.034721   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:18.037266   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.037681   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.037712   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.037840   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038381   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038557   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:18.038667   17389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0813 23:48:18.038724   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:18.038821   17389 ssh_runner.go:195] Run: cat /version.json
	I0813 23:48:18.038843   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:18.041215   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041458   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041490   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.041514   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041617   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:18.041790   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:18.041844   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:18.041868   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:18.041929   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:18.042017   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:18.042120   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:18.042205   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:18.042325   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:18.042533   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:18.152396   17389 ssh_runner.go:195] Run: systemctl --version
	I0813 23:48:18.157924   17389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0813 23:48:18.310645   17389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0813 23:48:18.316220   17389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0813 23:48:18.316274   17389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0813 23:48:18.336256   17389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0813 23:48:18.336275   17389 start.go:495] detecting cgroup driver to use...
	I0813 23:48:18.336338   17389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0813 23:48:18.352990   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0813 23:48:18.366259   17389 docker.go:217] disabling cri-docker service (if available) ...
	I0813 23:48:18.366309   17389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0813 23:48:18.379194   17389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0813 23:48:18.394633   17389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0813 23:48:18.519796   17389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0813 23:48:18.668865   17389 docker.go:233] disabling docker service ...
	I0813 23:48:18.668944   17389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0813 23:48:18.682362   17389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0813 23:48:18.694540   17389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0813 23:48:18.830643   17389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0813 23:48:18.941346   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0813 23:48:18.954278   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 23:48:18.971704   17389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0813 23:48:18.971774   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:18.981199   17389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0813 23:48:18.981264   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:18.990523   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.000175   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.010005   17389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0813 23:48:19.019832   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.030113   17389 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0813 23:48:19.046588   17389 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
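Taken together, the sed edits above leave the CRI-O drop-in pointing at the expected pause image, the cgroupfs manager, and an unprivileged-port sysctl. A sketch of the resulting keys in /etc/crio/crio.conf.d/02-crio.conf (the surrounding TOML section headers come from the stock drop-in and are assumed here, since they are not shown in the log):
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]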
	I0813 23:48:19.056483   17389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 23:48:19.065842   17389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0813 23:48:19.065900   17389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0813 23:48:19.079300   17389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
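The status-255 failure above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the provisioner falls back to loading it and enabling IPv4 forwarding directly. The equivalent manual sequence would be roughly:
	sudo modprobe br_netfilter                        # creates the /proc/sys/net/bridge/* keys
	sysctl net.bridge.bridge-nf-call-iptables         # now resolves instead of erroring
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # allow the node to route pod traffic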
	I0813 23:48:19.088997   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:19.195893   17389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0813 23:48:19.337001   17389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0813 23:48:19.337114   17389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0813 23:48:19.341809   17389 start.go:563] Will wait 60s for crictl version
	I0813 23:48:19.341877   17389 ssh_runner.go:195] Run: which crictl
	I0813 23:48:19.345245   17389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0813 23:48:19.380819   17389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0813 23:48:19.380952   17389 ssh_runner.go:195] Run: crio --version
	I0813 23:48:19.406256   17389 ssh_runner.go:195] Run: crio --version
	I0813 23:48:19.435050   17389 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0813 23:48:19.436271   17389 main.go:141] libmachine: (addons-937866) Calling .GetIP
	I0813 23:48:19.439015   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:19.439287   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:19.439307   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:19.439568   17389 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0813 23:48:19.443714   17389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
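The one-liner above is an idempotent /etc/hosts rewrite: strip any stale tab-separated host.minikube.internal entry, append the current mapping, and install the result through a temp file with sudo cp (a plain shell redirect would not run as root). Spelled out, the same pattern looks roughly like:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.39.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$                  # build the new file as the SSH user
	sudo cp /tmp/hosts.$$ /etc/hosts   # then install it with root privileges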
	I0813 23:48:19.456307   17389 kubeadm.go:883] updating cluster {Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0813 23:48:19.456422   17389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:48:19.456488   17389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0813 23:48:19.487831   17389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0813 23:48:19.487912   17389 ssh_runner.go:195] Run: which lz4
	I0813 23:48:19.491707   17389 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0813 23:48:19.495730   17389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0813 23:48:19.495758   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0813 23:48:20.568086   17389 crio.go:462] duration metric: took 1.076405627s to copy over tarball
	I0813 23:48:20.568163   17389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0813 23:48:22.703031   17389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.134838623s)
	I0813 23:48:22.703065   17389 crio.go:469] duration metric: took 2.134951647s to extract the tarball
	I0813 23:48:22.703075   17389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0813 23:48:22.738320   17389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0813 23:48:22.784505   17389 crio.go:514] all images are preloaded for cri-o runtime.
	I0813 23:48:22.784526   17389 cache_images.go:84] Images are preloaded, skipping loading
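At this point the extracted preload has populated CRI-O's image store, which is why the same sudo crictl images --output json check that found nothing at 23:48:19 now reports everything present. Assuming jq is available wherever the JSON is inspected, the preloaded tags could be listed with something like:
	sudo crictl images --output json | jq -r '.images[].repoTags[]'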
	I0813 23:48:22.784534   17389 kubeadm.go:934] updating node { 192.168.39.8 8443 v1.31.0 crio true true} ...
	I0813 23:48:22.784646   17389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-937866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0813 23:48:22.784707   17389 ssh_runner.go:195] Run: crio config
	I0813 23:48:22.825980   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:48:22.825999   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:48:22.826008   17389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0813 23:48:22.826035   17389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-937866 NodeName:addons-937866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0813 23:48:22.826198   17389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-937866"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 23:48:22.826274   17389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0813 23:48:22.835697   17389 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 23:48:22.835768   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 23:48:22.844796   17389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0813 23:48:22.859923   17389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 23:48:22.874987   17389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0813 23:48:22.890240   17389 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0813 23:48:22.893912   17389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 23:48:22.905336   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:23.017756   17389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 23:48:23.034255   17389 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866 for IP: 192.168.39.8
	I0813 23:48:23.034281   17389 certs.go:194] generating shared ca certs ...
	I0813 23:48:23.034300   17389 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.034463   17389 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0813 23:48:23.097098   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt ...
	I0813 23:48:23.097123   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt: {Name:mk2977fbe2eeb4385cb50c31ef49d890db41b8bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.097289   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key ...
	I0813 23:48:23.097299   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key: {Name:mke2ec5f52fb9207c0853de1fa6abf7f31b66110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.097367   17389 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0813 23:48:23.145500   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt ...
	I0813 23:48:23.145526   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt: {Name:mk6a4b7b7b85b800eb2b54749ea5d443607a3feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.145679   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key ...
	I0813 23:48:23.145690   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key: {Name:mk58b1c47f5e33e6b8b6b98b3d9f11f815c4d139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.145757   17389 certs.go:256] generating profile certs ...
	I0813 23:48:23.145809   17389 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key
	I0813 23:48:23.145831   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt with IP's: []
	I0813 23:48:23.285504   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt ...
	I0813 23:48:23.285535   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: {Name:mkeac58f052437b2d744fedbb7b91d00b0fc5f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.285691   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key ...
	I0813 23:48:23.285701   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.key: {Name:mka8d943dfc54e96068a797dac3bd89a31200db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.285772   17389 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42
	I0813 23:48:23.285789   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.8]
	I0813 23:48:23.347329   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 ...
	I0813 23:48:23.347357   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42: {Name:mk59cd7c654147be8ecda1106b330647aaf66d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.347503   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42 ...
	I0813 23:48:23.347514   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42: {Name:mkbbf49bb639d8a56eb052d370671f63f5678ea1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.347579   17389 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt.51719a42 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt
	I0813 23:48:23.347669   17389 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key.51719a42 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key
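The SAN list used for this cert (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.8) covers the addresses a client might use to reach the apiserver; 10.96.0.1 is the ClusterIP the in-cluster kubernetes service gets from the 10.96.0.0/12 ServiceCIDR, and 192.168.39.8 is the node IP. The names baked into the cert can be confirmed with a standard openssl call against the profile copy just written, e.g.:
	openssl x509 -in /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'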
	I0813 23:48:23.347722   17389 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key
	I0813 23:48:23.347739   17389 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt with IP's: []
	I0813 23:48:23.561565   17389 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt ...
	I0813 23:48:23.561600   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt: {Name:mk85241301946cdd3bbc9cef53a4b84f65b6fe58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.561764   17389 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key ...
	I0813 23:48:23.561775   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key: {Name:mk3e6bfee87b00af8f8a4fb1688e115f7968ea18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:23.561931   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0813 23:48:23.561965   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0813 23:48:23.561991   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0813 23:48:23.562013   17389 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0813 23:48:23.562602   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 23:48:23.585612   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 23:48:23.608721   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 23:48:23.631365   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 23:48:23.652810   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0813 23:48:23.673830   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 23:48:23.697682   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 23:48:23.723797   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 23:48:23.748347   17389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 23:48:23.770963   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 23:48:23.785945   17389 ssh_runner.go:195] Run: openssl version
	I0813 23:48:23.791325   17389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 23:48:23.801224   17389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.805088   17389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.805138   17389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 23:48:23.810369   17389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
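The hash printed by openssl x509 -hash is what gives the symlink its name: OpenSSL's CApath lookup finds trust anchors through files named hash.0, so once /etc/ssl/certs/b5213941.0 points at minikubeCA.pem, anything using the system CA path also trusts the minikube CA. A quick check of both ends:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink -> minikubeCA.pem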
	I0813 23:48:23.820184   17389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0813 23:48:23.823707   17389 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0813 23:48:23.823764   17389 kubeadm.go:392] StartCluster: {Name:addons-937866 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-937866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:48:23.823833   17389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0813 23:48:23.823902   17389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 23:48:23.860040   17389 cri.go:89] found id: ""
	I0813 23:48:23.860109   17389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 23:48:23.869680   17389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 23:48:23.878643   17389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 23:48:23.887340   17389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 23:48:23.887389   17389 kubeadm.go:157] found existing configuration files:
	
	I0813 23:48:23.887440   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0813 23:48:23.895816   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0813 23:48:23.895868   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0813 23:48:23.904305   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0813 23:48:23.913063   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0813 23:48:23.913122   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0813 23:48:23.921804   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0813 23:48:23.930089   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0813 23:48:23.930141   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 23:48:23.938746   17389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0813 23:48:23.946920   17389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0813 23:48:23.946982   17389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
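The four grep/rm pairs above follow one pattern: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it before init. A minimal sketch of that loop (generalizing the commands shown in the log, not the exact code minikube runs):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done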
	I0813 23:48:23.955228   17389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0813 23:48:24.003406   17389 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0813 23:48:24.003537   17389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0813 23:48:24.097007   17389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0813 23:48:24.097143   17389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0813 23:48:24.097268   17389 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0813 23:48:24.108332   17389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0813 23:48:24.110514   17389 out.go:204]   - Generating certificates and keys ...
	I0813 23:48:24.110596   17389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0813 23:48:24.110679   17389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0813 23:48:24.180944   17389 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0813 23:48:24.238864   17389 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0813 23:48:24.475259   17389 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0813 23:48:24.699741   17389 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0813 23:48:24.773354   17389 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0813 23:48:24.773868   17389 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-937866 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0813 23:48:25.030034   17389 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0813 23:48:25.030205   17389 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-937866 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0813 23:48:25.146473   17389 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0813 23:48:25.473747   17389 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0813 23:48:25.595182   17389 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0813 23:48:25.595722   17389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0813 23:48:25.694610   17389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0813 23:48:25.788502   17389 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0813 23:48:25.962252   17389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0813 23:48:26.297629   17389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0813 23:48:26.408172   17389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0813 23:48:26.409112   17389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0813 23:48:26.411609   17389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0813 23:48:26.413354   17389 out.go:204]   - Booting up control plane ...
	I0813 23:48:26.413452   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0813 23:48:26.413540   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0813 23:48:26.414100   17389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0813 23:48:26.437069   17389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0813 23:48:26.443257   17389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0813 23:48:26.443334   17389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0813 23:48:26.566660   17389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0813 23:48:26.566834   17389 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0813 23:48:27.068269   17389 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.823494ms
	I0813 23:48:27.068374   17389 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0813 23:48:32.066559   17389 kubeadm.go:310] [api-check] The API server is healthy after 5.001845026s
	I0813 23:48:32.084830   17389 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0813 23:48:32.106881   17389 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0813 23:48:32.140124   17389 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0813 23:48:32.140400   17389 kubeadm.go:310] [mark-control-plane] Marking the node addons-937866 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0813 23:48:32.154831   17389 kubeadm.go:310] [bootstrap-token] Using token: htc53c.dc5uvt68z1ujyfnc
	I0813 23:48:32.156566   17389 out.go:204]   - Configuring RBAC rules ...
	I0813 23:48:32.156705   17389 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0813 23:48:32.160739   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0813 23:48:32.174011   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0813 23:48:32.177420   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0813 23:48:32.181224   17389 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0813 23:48:32.183911   17389 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0813 23:48:32.473528   17389 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0813 23:48:32.959993   17389 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0813 23:48:33.474101   17389 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0813 23:48:33.475028   17389 kubeadm.go:310] 
	I0813 23:48:33.475114   17389 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0813 23:48:33.475127   17389 kubeadm.go:310] 
	I0813 23:48:33.475239   17389 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0813 23:48:33.475251   17389 kubeadm.go:310] 
	I0813 23:48:33.475282   17389 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0813 23:48:33.475373   17389 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0813 23:48:33.475464   17389 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0813 23:48:33.475486   17389 kubeadm.go:310] 
	I0813 23:48:33.475563   17389 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0813 23:48:33.475572   17389 kubeadm.go:310] 
	I0813 23:48:33.475637   17389 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0813 23:48:33.475646   17389 kubeadm.go:310] 
	I0813 23:48:33.475749   17389 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0813 23:48:33.475855   17389 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0813 23:48:33.475951   17389 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0813 23:48:33.475969   17389 kubeadm.go:310] 
	I0813 23:48:33.476069   17389 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0813 23:48:33.476171   17389 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0813 23:48:33.476180   17389 kubeadm.go:310] 
	I0813 23:48:33.476306   17389 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token htc53c.dc5uvt68z1ujyfnc \
	I0813 23:48:33.476435   17389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0813 23:48:33.476463   17389 kubeadm.go:310] 	--control-plane 
	I0813 23:48:33.476476   17389 kubeadm.go:310] 
	I0813 23:48:33.476572   17389 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0813 23:48:33.476584   17389 kubeadm.go:310] 
	I0813 23:48:33.476689   17389 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token htc53c.dc5uvt68z1ujyfnc \
	I0813 23:48:33.476822   17389 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0813 23:48:33.477490   17389 kubeadm.go:310] W0813 23:48:23.981767     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0813 23:48:33.477815   17389 kubeadm.go:310] W0813 23:48:23.982640     821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0813 23:48:33.477944   17389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
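A minimal way to double-check the result of the init above by hand, using the admin kubeconfig kubeadm just wrote (illustrative only; the test harness performs its own verification further down):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get pods -n kube-system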
	I0813 23:48:33.477985   17389 cni.go:84] Creating CNI manager for ""
	I0813 23:48:33.477998   17389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:48:33.479696   17389 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0813 23:48:33.480891   17389 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0813 23:48:33.490848   17389 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
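The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As a rough sketch of the shape of a bridge CNI conflist only, the name, subnet and plugin options below are assumptions for illustration, not the exact file minikube writes:

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF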
	I0813 23:48:33.507814   17389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 23:48:33.507900   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:33.507944   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-937866 minikube.k8s.io/updated_at=2024_08_13T23_48_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=addons-937866 minikube.k8s.io/primary=true
	I0813 23:48:33.526829   17389 ops.go:34] apiserver oom_adj: -16
	I0813 23:48:33.654478   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:34.155263   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:34.655260   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:35.154984   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:35.655077   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:36.155495   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:36.654653   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:37.155298   17389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 23:48:37.237329   17389 kubeadm.go:1113] duration metric: took 3.729495612s to wait for elevateKubeSystemPrivileges
	I0813 23:48:37.237370   17389 kubeadm.go:394] duration metric: took 13.413610914s to StartCluster
	I0813 23:48:37.237394   17389 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:37.237545   17389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:48:37.238069   17389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:48:37.238282   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 23:48:37.238298   17389 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0813 23:48:37.238346   17389 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
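The toEnable map above is the programmatic counterpart of toggling addons per profile from the CLI; a couple of illustrative commands (a sketch only, not taken from this run):

minikube -p addons-937866 addons enable metrics-server
minikube -p addons-937866 addons enable ingress
minikube -p addons-937866 addons list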
	I0813 23:48:37.238458   17389 addons.go:69] Setting helm-tiller=true in profile "addons-937866"
	I0813 23:48:37.238471   17389 addons.go:69] Setting yakd=true in profile "addons-937866"
	I0813 23:48:37.238479   17389 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-937866"
	I0813 23:48:37.238493   17389 addons.go:234] Setting addon helm-tiller=true in "addons-937866"
	I0813 23:48:37.238496   17389 addons.go:234] Setting addon yakd=true in "addons-937866"
	I0813 23:48:37.238491   17389 addons.go:69] Setting ingress-dns=true in profile "addons-937866"
	I0813 23:48:37.238517   17389 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-937866"
	I0813 23:48:37.238524   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238530   17389 addons.go:69] Setting registry=true in profile "addons-937866"
	I0813 23:48:37.238530   17389 addons.go:69] Setting ingress=true in profile "addons-937866"
	I0813 23:48:37.238547   17389 addons.go:234] Setting addon registry=true in "addons-937866"
	I0813 23:48:37.238555   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238559   17389 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-937866"
	I0813 23:48:37.238566   17389 addons.go:69] Setting inspektor-gadget=true in profile "addons-937866"
	I0813 23:48:37.238575   17389 addons.go:69] Setting storage-provisioner=true in profile "addons-937866"
	I0813 23:48:37.238590   17389 addons.go:234] Setting addon inspektor-gadget=true in "addons-937866"
	I0813 23:48:37.238597   17389 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-937866"
	I0813 23:48:37.238603   17389 addons.go:69] Setting cloud-spanner=true in profile "addons-937866"
	I0813 23:48:37.238614   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238618   17389 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-937866"
	I0813 23:48:37.238623   17389 addons.go:234] Setting addon cloud-spanner=true in "addons-937866"
	I0813 23:48:37.238647   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238968   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238985   17389 addons.go:69] Setting default-storageclass=true in profile "addons-937866"
	I0813 23:48:37.238985   17389 addons.go:69] Setting gcp-auth=true in profile "addons-937866"
	I0813 23:48:37.239006   17389 mustload.go:65] Loading cluster: addons-937866
	I0813 23:48:37.239008   17389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-937866"
	I0813 23:48:37.239012   17389 addons.go:69] Setting volcano=true in profile "addons-937866"
	I0813 23:48:37.239016   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239024   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239032   17389 addons.go:234] Setting addon volcano=true in "addons-937866"
	I0813 23:48:37.239044   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239052   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239061   17389 addons.go:69] Setting metrics-server=true in profile "addons-937866"
	I0813 23:48:37.239086   17389 addons.go:234] Setting addon metrics-server=true in "addons-937866"
	I0813 23:48:37.239115   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239166   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:37.239322   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239347   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239351   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239375   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239484   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238555   17389 config.go:182] Loaded profile config "addons-937866": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0813 23:48:37.239507   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239513   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238522   17389 addons.go:234] Setting addon ingress-dns=true in "addons-937866"
	I0813 23:48:37.238549   17389 addons.go:234] Setting addon ingress=true in "addons-937866"
	I0813 23:48:37.239550   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239558   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239570   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.238592   17389 addons.go:234] Setting addon storage-provisioner=true in "addons-937866"
	I0813 23:48:37.239611   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239491   17389 addons.go:69] Setting volumesnapshots=true in profile "addons-937866"
	I0813 23:48:37.239698   17389 addons.go:234] Setting addon volumesnapshots=true in "addons-937866"
	I0813 23:48:37.239745   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239014   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239055   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.239927   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.239944   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238571   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.239977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.238977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240002   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240089   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240124   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240128   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240155   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238525   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.240318   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240381   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.240526   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.240551   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238977   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.242196   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.238598   17389 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-937866"
	I0813 23:48:37.257636   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.248104   17389 out.go:177] * Verifying Kubernetes components...
	I0813 23:48:37.248197   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.258202   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.258463   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.260051   17389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0813 23:48:37.260182   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0813 23:48:37.260360   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0813 23:48:37.260476   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0813 23:48:37.260796   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.260899   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.260910   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.261623   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261642   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.261641   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261697   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.261731   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.261758   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.262256   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262272   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262345   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.262804   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.262819   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.262844   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.262877   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.262929   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0813 23:48:37.263136   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.263222   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.263667   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.263686   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.264107   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.264364   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.265241   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.268896   17389 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-937866"
	I0813 23:48:37.268943   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.269393   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.269438   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.270187   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0813 23:48:37.270635   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.270662   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.272649   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.273156   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.273179   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.273530   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.274034   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.274086   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.280482   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0813 23:48:37.280616   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0813 23:48:37.281171   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.281754   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.281772   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.282148   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.282677   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.282717   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.283694   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.290557   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.290589   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.294466   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.298270   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.300168   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32953
	I0813 23:48:37.300327   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0813 23:48:37.300764   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.301355   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.301375   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.301773   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.302429   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.302467   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.302661   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0813 23:48:37.303022   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.303112   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.303635   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.303655   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.303802   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.303814   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.305118   17389 addons.go:234] Setting addon default-storageclass=true in "addons-937866"
	I0813 23:48:37.305158   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:37.305566   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.305596   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.305819   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.305881   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.305920   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0813 23:48:37.306130   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.306400   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.306439   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.306835   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.307330   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.307354   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.307685   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.307735   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.308627   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.308663   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.308842   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0813 23:48:37.309270   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.309725   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.309740   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.310035   17389 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0813 23:48:37.310194   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0813 23:48:37.310545   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.310629   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.311133   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.311154   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.311362   17389 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0813 23:48:37.311377   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0813 23:48:37.311395   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.311505   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.311560   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.311589   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.311720   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.311721   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0813 23:48:37.312089   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.312559   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.312580   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.312890   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.313401   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.313442   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.315743   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.316392   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.316417   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.316557   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.316665   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.316744   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.316827   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
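The ssh client created here uses the key, user and address recorded in the log; the same node can be reached by hand (illustrative only):

ssh -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa docker@192.168.39.8
# or, equivalently for this profile:
minikube -p addons-937866 ssh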
	I0813 23:48:37.317557   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.317844   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:37.317872   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:37.319654   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:37.319676   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:37.319690   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:37.319712   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:37.319721   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:37.319949   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:37.319957   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:37.319968   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	W0813 23:48:37.320057   17389 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
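The warning above is expected on this runner: the volcano addon declines to run on the crio runtime. The profile's runtime and the resulting addon states can be confirmed with (illustrative, not from this log):

minikube profile list
minikube -p addons-937866 addons list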
	I0813 23:48:37.321837   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0813 23:48:37.322281   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.322721   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.322742   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.323044   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.323184   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.324831   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.326598   17389 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0813 23:48:37.327831   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 23:48:37.327854   17389 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0813 23:48:37.327873   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.331722   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.332319   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.332339   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.332612   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.332829   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.333006   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.333155   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.334728   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0813 23:48:37.334909   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0813 23:48:37.335240   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.335323   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.335714   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.335728   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.336027   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.336430   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.336445   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.336536   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.336571   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.336852   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.337057   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.337468   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0813 23:48:37.338663   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.339088   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.339137   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.339159   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.339500   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.339696   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.340978   17389 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0813 23:48:37.341302   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.342494   17389 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0813 23:48:37.342514   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0813 23:48:37.342532   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.343109   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0813 23:48:37.344075   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I0813 23:48:37.345593   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0813 23:48:37.345704   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.346017   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.346036   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.346200   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.346377   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.346526   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.346650   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.348218   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.348311   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0813 23:48:37.348751   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.348767   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.348951   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0813 23:48:37.349138   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.349544   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.349676   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
	I0813 23:48:37.349785   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.349817   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.350072   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.350149   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33955
	I0813 23:48:37.350572   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.350630   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.350648   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.350786   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0813 23:48:37.350913   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.351297   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.351324   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.351416   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.351434   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.352264   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.352306   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.352529   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.353005   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.353036   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.353203   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.353392   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.353547   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0813 23:48:37.354523   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0813 23:48:37.354943   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.355163   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.355585   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.355608   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.355758   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0813 23:48:37.355932   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.356275   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.359157   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0813 23:48:37.359420   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0813 23:48:37.360262   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 23:48:37.360283   17389 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 23:48:37.360304   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.360322   17389 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0813 23:48:37.360404   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0813 23:48:37.360854   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.361304   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 23:48:37.361323   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 23:48:37.361342   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.361422   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.361436   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.361790   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.361990   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.363699   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.364069   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.364239   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0813 23:48:37.364428   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.364445   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.364849   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.364922   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.365087   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.365450   17389 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0813 23:48:37.365687   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.365703   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.365762   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.365960   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.366202   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.366840   17389 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0813 23:48:37.366860   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0813 23:48:37.366878   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.366940   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.367250   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.367291   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.368335   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.368370   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.368603   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.368808   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.369087   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.369264   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.370236   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.370750   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0813 23:48:37.371153   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.371169   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.371204   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.371688   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.371704   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.371761   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.371976   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.372176   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.372370   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.372965   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.373180   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.376430   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0813 23:48:37.376456   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.376949   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0813 23:48:37.377097   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.377363   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.377920   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.377944   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.378230   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.378259   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.378309   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.378411   17389 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0813 23:48:37.378542   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.379211   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.379383   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0813 23:48:37.379820   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.379986   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0813 23:48:37.379999   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0813 23:48:37.380017   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.380268   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.380282   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.380337   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.381128   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:37.381169   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:37.382193   17389 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 23:48:37.382547   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0813 23:48:37.383048   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.383827   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.383846   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.383862   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.383867   17389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 23:48:37.383903   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 23:48:37.383918   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.383959   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.384205   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.384387   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.384407   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.384912   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.385138   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.385200   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.385468   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.385541   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.385642   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.386559   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.388084   17389 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0813 23:48:37.389187   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I0813 23:48:37.389301   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0813 23:48:37.389316   17389 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0813 23:48:37.389334   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.389683   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.390112   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.390165   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.390177   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.390972   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.391036   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.391216   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.391347   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.391368   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.391566   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.391717   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0813 23:48:37.391734   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.391906   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.392164   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.393004   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.393252   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0813 23:48:37.393494   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.393514   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.393593   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.393862   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.393890   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.394127   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.394135   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:37.394180   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.394458   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.394477   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.394789   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.394838   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.395253   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.395366   17389 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0813 23:48:37.396480   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:37.396686   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.396986   17389 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 23:48:37.397001   17389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 23:48:37.397018   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.397061   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43561
	I0813 23:48:37.397404   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:37.397554   17389 out.go:177]   - Using image docker.io/busybox:stable
	I0813 23:48:37.397815   17389 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0813 23:48:37.397835   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0813 23:48:37.397849   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.397875   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.397887   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.398239   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.398450   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.398870   17389 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0813 23:48:37.398888   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0813 23:48:37.398904   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.400880   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.402871   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.402886   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.402902   17389 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0813 23:48:37.402910   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.402982   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403030   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.403192   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.403214   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403247   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.403303   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.403487   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.403494   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.403663   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.403916   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.403932   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.403939   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.404031   17389 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0813 23:48:37.404047   17389 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0813 23:48:37.404065   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.404102   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.404684   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.405062   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.405269   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.405432   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.405556   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.406374   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0813 23:48:37.407095   17389 main.go:141] libmachine: () Calling .GetVersion
	W0813 23:48:37.407234   17389 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37800->192.168.39.8:22: read: connection reset by peer
	I0813 23:48:37.407255   17389 retry.go:31] will retry after 192.929271ms: ssh: handshake failed: read tcp 192.168.39.1:37800->192.168.39.8:22: read: connection reset by peer
	I0813 23:48:37.407563   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.407770   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:37.407789   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:37.408030   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.408049   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.408191   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.408214   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:37.408370   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.408373   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:37.408506   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.408632   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.409889   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:37.411353   17389 out.go:177]   - Using image docker.io/registry:2.8.3
	I0813 23:48:37.412557   17389 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0813 23:48:37.413669   17389 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 23:48:37.413684   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0813 23:48:37.413706   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:37.416878   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.417291   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:37.417314   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:37.417454   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:37.417656   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:37.417798   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:37.417945   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:37.706081   17389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0813 23:48:37.707962   17389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 23:48:37.726150   17389 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 23:48:37.726173   17389 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 23:48:37.747197   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 23:48:37.747217   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0813 23:48:37.823431   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0813 23:48:37.928090   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0813 23:48:37.928111   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0813 23:48:37.928344   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0813 23:48:37.971749   17389 node_ready.go:35] waiting up to 6m0s for node "addons-937866" to be "Ready" ...
	I0813 23:48:37.978242   17389 node_ready.go:49] node "addons-937866" has status "Ready":"True"
	I0813 23:48:37.978271   17389 node_ready.go:38] duration metric: took 6.494553ms for node "addons-937866" to be "Ready" ...
	I0813 23:48:37.978284   17389 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 23:48:37.979105   17389 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 23:48:37.979122   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0813 23:48:38.017932   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0813 23:48:38.017957   17389 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0813 23:48:38.026363   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 23:48:38.029945   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0813 23:48:38.031413   17389 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:38.032312   17389 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0813 23:48:38.032326   17389 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0813 23:48:38.034494   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 23:48:38.034508   17389 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0813 23:48:38.048324   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 23:48:38.048347   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 23:48:38.048953   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 23:48:38.058771   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0813 23:48:38.080228   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0813 23:48:38.146953   17389 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0813 23:48:38.146976   17389 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0813 23:48:38.170939   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0813 23:48:38.170962   17389 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0813 23:48:38.174228   17389 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 23:48:38.174245   17389 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0813 23:48:38.198818   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0813 23:48:38.198838   17389 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0813 23:48:38.280387   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 23:48:38.291444   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 23:48:38.291470   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0813 23:48:38.345715   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 23:48:38.345745   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 23:48:38.364841   17389 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0813 23:48:38.364868   17389 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0813 23:48:38.390098   17389 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 23:48:38.390122   17389 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0813 23:48:38.421644   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 23:48:38.446482   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0813 23:48:38.446504   17389 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0813 23:48:38.553826   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 23:48:38.553850   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0813 23:48:38.576406   17389 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0813 23:48:38.576428   17389 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0813 23:48:38.579688   17389 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 23:48:38.579705   17389 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0813 23:48:38.653128   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0813 23:48:38.735032   17389 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0813 23:48:38.735055   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0813 23:48:38.792541   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 23:48:38.792567   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0813 23:48:38.796030   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 23:48:38.796053   17389 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0813 23:48:38.824222   17389 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0813 23:48:38.824247   17389 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0813 23:48:38.979555   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0813 23:48:39.011618   17389 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 23:48:39.011641   17389 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0813 23:48:39.031256   17389 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:39.031276   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0813 23:48:39.141039   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:39.164078   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 23:48:39.164102   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0813 23:48:39.164763   17389 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0813 23:48:39.164779   17389 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0813 23:48:39.428182   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 23:48:39.428209   17389 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0813 23:48:39.496847   17389 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0813 23:48:39.496873   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0813 23:48:39.724770   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0813 23:48:39.767305   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 23:48:39.767331   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0813 23:48:39.862585   17389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.154592722s)
	I0813 23:48:39.862621   17389 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0813 23:48:40.038812   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:40.140702   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 23:48:40.140723   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0813 23:48:40.353191   17389 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 23:48:40.353217   17389 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 23:48:40.368324   17389 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-937866" context rescaled to 1 replicas
	I0813 23:48:40.667892   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 23:48:42.062104   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:44.116373   17389 pod_ready.go:92] pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:44.116404   17389 pod_ready.go:81] duration metric: took 6.084969999s for pod "coredns-6f6b679f8f-mq64k" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:44.116416   17389 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:44.387202   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0813 23:48:44.387235   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:44.390396   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:44.390786   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:44.390817   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:44.390970   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:44.391170   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:44.391340   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:44.391485   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:44.926184   17389 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0813 23:48:44.984658   17389 addons.go:234] Setting addon gcp-auth=true in "addons-937866"
	I0813 23:48:44.984722   17389 host.go:66] Checking if "addons-937866" exists ...
	I0813 23:48:44.985199   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:44.985244   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:45.000914   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0813 23:48:45.001353   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:45.001892   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:45.001920   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:45.002238   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:45.002833   17389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0813 23:48:45.002868   17389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0813 23:48:45.018910   17389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0813 23:48:45.019280   17389 main.go:141] libmachine: () Calling .GetVersion
	I0813 23:48:45.019785   17389 main.go:141] libmachine: Using API Version  1
	I0813 23:48:45.019809   17389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0813 23:48:45.020151   17389 main.go:141] libmachine: () Calling .GetMachineName
	I0813 23:48:45.020344   17389 main.go:141] libmachine: (addons-937866) Calling .GetState
	I0813 23:48:45.021996   17389 main.go:141] libmachine: (addons-937866) Calling .DriverName
	I0813 23:48:45.022228   17389 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0813 23:48:45.022248   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHHostname
	I0813 23:48:45.024869   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:45.025249   17389 main.go:141] libmachine: (addons-937866) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:c3:1c", ip: ""} in network mk-addons-937866: {Iface:virbr1 ExpiryTime:2024-08-14 00:48:10 +0000 UTC Type:0 Mac:52:54:00:a3:c3:1c Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:addons-937866 Clientid:01:52:54:00:a3:c3:1c}
	I0813 23:48:45.025272   17389 main.go:141] libmachine: (addons-937866) DBG | domain addons-937866 has defined IP address 192.168.39.8 and MAC address 52:54:00:a3:c3:1c in network mk-addons-937866
	I0813 23:48:45.025430   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHPort
	I0813 23:48:45.025597   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHKeyPath
	I0813 23:48:45.025737   17389 main.go:141] libmachine: (addons-937866) Calling .GetSSHUsername
	I0813 23:48:45.025854   17389 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/addons-937866/id_rsa Username:docker}
	I0813 23:48:45.638443   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.814982912s)
	I0813 23:48:45.638496   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638498   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.710127754s)
	I0813 23:48:45.638539   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638556   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638508   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638609   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.589637115s)
	I0813 23:48:45.638631   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638645   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638677   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.579879698s)
	I0813 23:48:45.638541   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.612157426s)
	I0813 23:48:45.638699   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638712   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638744   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638744   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.558489355s)
	I0813 23:48:45.638762   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638777   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638783   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.358369214s)
	I0813 23:48:45.638788   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638800   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638810   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638582   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.608615632s)
	I0813 23:48:45.638829   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638838   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.638909   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.217236424s)
	I0813 23:48:45.638934   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.638945   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639014   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.985811076s)
	I0813 23:48:45.639035   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.659448322s)
	I0813 23:48:45.639047   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639059   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639058   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639071   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.639203   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498130942s)
	W0813 23:48:45.639268   17389 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0813 23:48:45.639290   17389 retry.go:31] will retry after 269.107791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0813 23:48:45.639355   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.914546449s)
	I0813 23:48:45.639379   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.639399   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642604   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642611   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642618   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642668   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642686   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642690   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642701   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642713   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642729   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642716   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642769   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642770   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642645   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642782   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642752   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642757   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642790   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642795   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642810   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642823   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642840   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642881   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642904   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642923   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642942   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642652   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642979   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642981   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642987   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.642991   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.642995   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643008   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642999   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.642964   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643098   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643127   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643134   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643186   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642671   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643206   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643214   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643221   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643263   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643283   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643307   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643322   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643331   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643388   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642676   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643427   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643435   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643477   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.642933   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643518   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643535   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643543   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643572   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.642949   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643601   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643613   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.643626   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.643288   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643734   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643755   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643823   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.643863   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.643887   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644059   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644075   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644084   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.644092   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.644113   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644139   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644147   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.644155   17389 addons.go:475] Verifying addon ingress=true in "addons-937866"
	I0813 23:48:45.644397   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644415   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.644436   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.644445   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.645291   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.645316   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.645323   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643590   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.643693   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.646069   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.646079   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.646087   17389 addons.go:475] Verifying addon registry=true in "addons-937866"
	I0813 23:48:45.646278   17389 out.go:177] * Verifying ingress addon...
	I0813 23:48:45.646818   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.646828   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647278   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.647278   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.647284   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.647292   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647301   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.647310   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.647317   17389 addons.go:475] Verifying addon metrics-server=true in "addons-937866"
	I0813 23:48:45.647389   17389 out.go:177] * Verifying registry addon...
	I0813 23:48:45.648252   17389 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 23:48:45.648645   17389 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-937866 service yakd-dashboard -n yakd-dashboard
	
	I0813 23:48:45.649441   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 23:48:45.664291   17389 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 23:48:45.664314   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:45.664474   17389 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 23:48:45.664498   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:45.702625   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.702645   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.702926   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.702946   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	W0813 23:48:45.703022   17389 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0813 23:48:45.702926   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:45.707332   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:45.707348   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:45.707572   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:45.707589   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:45.908712   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 23:48:46.127205   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:46.156242   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:46.156435   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:46.588768   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.920810342s)
	I0813 23:48:46.588818   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:46.588832   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:46.588775   17389 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.566524559s)
	I0813 23:48:46.589100   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:46.589120   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:46.589132   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:46.589146   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:46.589159   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:46.589413   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:46.589430   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:46.589437   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:46.589459   17389 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-937866"
	I0813 23:48:46.591107   17389 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0813 23:48:46.591113   17389 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 23:48:46.592494   17389 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0813 23:48:46.593099   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 23:48:46.593493   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0813 23:48:46.593508   17389 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0813 23:48:46.607372   17389 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 23:48:46.607391   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:46.671972   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0813 23:48:46.671994   17389 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0813 23:48:46.673979   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:46.674312   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:46.754905   17389 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 23:48:46.754927   17389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0813 23:48:46.835695   17389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 23:48:47.103669   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:47.152591   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:47.152915   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:47.598634   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:47.652519   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:47.652912   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:47.738249   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.829486095s)
	I0813 23:48:47.738303   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:47.738317   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:47.738641   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:47.738661   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:47.738661   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:47.738671   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:47.738679   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:47.738885   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:47.738897   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.175682   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:48.180153   17389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.344420445s)
	I0813 23:48:48.180209   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:48.180226   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:48.180547   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:48.180610   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.180635   17389 main.go:141] libmachine: Making call to close driver server
	I0813 23:48:48.180651   17389 main.go:141] libmachine: (addons-937866) Calling .Close
	I0813 23:48:48.180608   17389 main.go:141] libmachine: (addons-937866) DBG | Closing plugin on server side
	I0813 23:48:48.180923   17389 main.go:141] libmachine: Successfully made call to close driver server
	I0813 23:48:48.180942   17389 main.go:141] libmachine: Making call to close connection to plugin binary
	I0813 23:48:48.182726   17389 addons.go:475] Verifying addon gcp-auth=true in "addons-937866"
	I0813 23:48:48.184296   17389 out.go:177] * Verifying gcp-auth addon...
	I0813 23:48:48.186137   17389 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0813 23:48:48.209211   17389 pod_ready.go:102] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status "Ready":"False"
	I0813 23:48:48.232972   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:48.233134   17389 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0813 23:48:48.233159   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:48.233550   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:48.598896   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:48.653497   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:48.653611   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:48.690412   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:49.097830   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:49.153209   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:49.153525   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:49.190218   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:49.597264   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:49.653896   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:49.654417   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:49.689768   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.097569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:50.122112   17389 pod_ready.go:97] pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.8 HostIPs:[{IP:192.168.39.8}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-13 23:48:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-13 23:48:41 +0000 UTC,FinishedAt:2024-08-13 23:48:48 +0000 UTC,ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf Started:0xc00280d810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00281a900} {Name:kube-api-access-45gsc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00281a910}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0813 23:48:50.122147   17389 pod_ready.go:81] duration metric: took 6.005722085s for pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace to be "Ready" ...
	E0813 23:48:50.122168   17389 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-xg8fx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-13 23:48:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.8 HostIPs:[{IP:192.168.39.8}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-13 23:48:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-13 23:48:41 +0000 UTC,FinishedAt:2024-08-13 23:48:48 +0000 UTC,ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://57188d13697467e6140175385ca067455c09a2e9f44f868ff2c79498b0bf8ccf Started:0xc00280d810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00281a900} {Name:kube-api-access-45gsc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00281a910}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0813 23:48:50.122183   17389 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.126292   17389 pod_ready.go:92] pod "etcd-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.126313   17389 pod_ready.go:81] duration metric: took 4.120047ms for pod "etcd-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.126325   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.130912   17389 pod_ready.go:92] pod "kube-apiserver-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.130931   17389 pod_ready.go:81] duration metric: took 4.598167ms for pod "kube-apiserver-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.130942   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.135325   17389 pod_ready.go:92] pod "kube-controller-manager-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.135340   17389 pod_ready.go:81] duration metric: took 4.391855ms for pod "kube-controller-manager-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.135351   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-824wz" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.140106   17389 pod_ready.go:92] pod "kube-proxy-824wz" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.140122   17389 pod_ready.go:81] duration metric: took 4.764171ms for pod "kube-proxy-824wz" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.140131   17389 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.152083   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:50.155833   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:50.190421   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.520537   17389 pod_ready.go:92] pod "kube-scheduler-addons-937866" in "kube-system" namespace has status "Ready":"True"
	I0813 23:48:50.520567   17389 pod_ready.go:81] duration metric: took 380.427953ms for pod "kube-scheduler-addons-937866" in "kube-system" namespace to be "Ready" ...
	I0813 23:48:50.520580   17389 pod_ready.go:38] duration metric: took 12.542275153s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 23:48:50.520598   17389 api_server.go:52] waiting for apiserver process to appear ...
	I0813 23:48:50.520675   17389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 23:48:50.563714   17389 api_server.go:72] duration metric: took 13.32538518s to wait for apiserver process to appear ...
	I0813 23:48:50.563748   17389 api_server.go:88] waiting for apiserver healthz status ...
	I0813 23:48:50.563771   17389 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0813 23:48:50.571204   17389 api_server.go:279] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0813 23:48:50.572748   17389 api_server.go:141] control plane version: v1.31.0
	I0813 23:48:50.572774   17389 api_server.go:131] duration metric: took 9.018119ms to wait for apiserver health ...
	I0813 23:48:50.572783   17389 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 23:48:50.600576   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:50.655035   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:50.658972   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:50.690600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:50.728332   17389 system_pods.go:59] 18 kube-system pods found
	I0813 23:48:50.728364   17389 system_pods.go:61] "coredns-6f6b679f8f-mq64k" [0528e757-cec5-40d0-9a8e-12819640a8db] Running
	I0813 23:48:50.728372   17389 system_pods.go:61] "csi-hostpath-attacher-0" [e4801af2-e316-4c00-bb1a-f69134d81190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 23:48:50.728378   17389 system_pods.go:61] "csi-hostpath-resizer-0" [f5bda74c-dfef-4e1c-857d-7d252de5db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 23:48:50.728393   17389 system_pods.go:61] "csi-hostpathplugin-vxpnf" [17d9d31f-6635-4275-9b5e-4bfa444ec3da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0813 23:48:50.728398   17389 system_pods.go:61] "etcd-addons-937866" [6d636c7e-8378-4d77-8a06-c97743bddc68] Running
	I0813 23:48:50.728402   17389 system_pods.go:61] "kube-apiserver-addons-937866" [9191440b-abcb-45ce-901c-ef6578bec1e0] Running
	I0813 23:48:50.728407   17389 system_pods.go:61] "kube-controller-manager-addons-937866" [8063133c-4ca8-4683-882a-37dbd1cd0ac0] Running
	I0813 23:48:50.728412   17389 system_pods.go:61] "kube-ingress-dns-minikube" [1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0813 23:48:50.728415   17389 system_pods.go:61] "kube-proxy-824wz" [8453a99d-976e-4371-9c3b-104af4136766] Running
	I0813 23:48:50.728419   17389 system_pods.go:61] "kube-scheduler-addons-937866" [b1f5df74-7ed9-4837-8cfb-deef2ecb11ca] Running
	I0813 23:48:50.728423   17389 system_pods.go:61] "metrics-server-8988944d9-mnlqq" [82850aaa-4f93-49e5-b89b-e86bc208fd74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 23:48:50.728430   17389 system_pods.go:61] "nvidia-device-plugin-daemonset-mg5kj" [decbf56f-a46d-4b32-a963-1abb25adfab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0813 23:48:50.728443   17389 system_pods.go:61] "registry-6fb4cdfc84-d8ptz" [03e452f4-85d3-486e-bf4e-30e1bf8b8929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 23:48:50.728449   17389 system_pods.go:61] "registry-proxy-9lq9k" [1cb9d48b-73e5-4500-bb30-902eac13720e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 23:48:50.728455   17389 system_pods.go:61] "snapshot-controller-56fcc65765-fnm49" [98fb76a3-1db4-4ad5-b71c-c64a3e5c97d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:50.728479   17389 system_pods.go:61] "snapshot-controller-56fcc65765-jg4b7" [fd5994b7-7852-4377-9f88-fa1d4de1138f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:50.728490   17389 system_pods.go:61] "storage-provisioner" [9ba3f553-c9e7-46cf-b4b9-a0e0246b026a] Running
	I0813 23:48:50.728496   17389 system_pods.go:61] "tiller-deploy-b48cc5f79-p2hvc" [66ce562c-db93-4b51-b8be-ce14bacba0f8] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 23:48:50.728501   17389 system_pods.go:74] duration metric: took 155.713541ms to wait for pod list to return data ...
	I0813 23:48:50.728509   17389 default_sa.go:34] waiting for default service account to be created ...
	I0813 23:48:50.920520   17389 default_sa.go:45] found service account: "default"
	I0813 23:48:50.920547   17389 default_sa.go:55] duration metric: took 192.03021ms for default service account to be created ...
	I0813 23:48:50.920555   17389 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 23:48:51.098244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:51.128050   17389 system_pods.go:86] 18 kube-system pods found
	I0813 23:48:51.128079   17389 system_pods.go:89] "coredns-6f6b679f8f-mq64k" [0528e757-cec5-40d0-9a8e-12819640a8db] Running
	I0813 23:48:51.128088   17389 system_pods.go:89] "csi-hostpath-attacher-0" [e4801af2-e316-4c00-bb1a-f69134d81190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0813 23:48:51.128096   17389 system_pods.go:89] "csi-hostpath-resizer-0" [f5bda74c-dfef-4e1c-857d-7d252de5db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0813 23:48:51.128106   17389 system_pods.go:89] "csi-hostpathplugin-vxpnf" [17d9d31f-6635-4275-9b5e-4bfa444ec3da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0813 23:48:51.128113   17389 system_pods.go:89] "etcd-addons-937866" [6d636c7e-8378-4d77-8a06-c97743bddc68] Running
	I0813 23:48:51.128122   17389 system_pods.go:89] "kube-apiserver-addons-937866" [9191440b-abcb-45ce-901c-ef6578bec1e0] Running
	I0813 23:48:51.128133   17389 system_pods.go:89] "kube-controller-manager-addons-937866" [8063133c-4ca8-4683-882a-37dbd1cd0ac0] Running
	I0813 23:48:51.128143   17389 system_pods.go:89] "kube-ingress-dns-minikube" [1b4c2a31-5938-43b5-9fa3-fe5b3ebf19bf] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0813 23:48:51.128155   17389 system_pods.go:89] "kube-proxy-824wz" [8453a99d-976e-4371-9c3b-104af4136766] Running
	I0813 23:48:51.128164   17389 system_pods.go:89] "kube-scheduler-addons-937866" [b1f5df74-7ed9-4837-8cfb-deef2ecb11ca] Running
	I0813 23:48:51.128172   17389 system_pods.go:89] "metrics-server-8988944d9-mnlqq" [82850aaa-4f93-49e5-b89b-e86bc208fd74] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0813 23:48:51.128183   17389 system_pods.go:89] "nvidia-device-plugin-daemonset-mg5kj" [decbf56f-a46d-4b32-a963-1abb25adfab9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0813 23:48:51.128193   17389 system_pods.go:89] "registry-6fb4cdfc84-d8ptz" [03e452f4-85d3-486e-bf4e-30e1bf8b8929] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 23:48:51.128209   17389 system_pods.go:89] "registry-proxy-9lq9k" [1cb9d48b-73e5-4500-bb30-902eac13720e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 23:48:51.128222   17389 system_pods.go:89] "snapshot-controller-56fcc65765-fnm49" [98fb76a3-1db4-4ad5-b71c-c64a3e5c97d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:51.128235   17389 system_pods.go:89] "snapshot-controller-56fcc65765-jg4b7" [fd5994b7-7852-4377-9f88-fa1d4de1138f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0813 23:48:51.128243   17389 system_pods.go:89] "storage-provisioner" [9ba3f553-c9e7-46cf-b4b9-a0e0246b026a] Running
	I0813 23:48:51.128249   17389 system_pods.go:89] "tiller-deploy-b48cc5f79-p2hvc" [66ce562c-db93-4b51-b8be-ce14bacba0f8] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0813 23:48:51.128258   17389 system_pods.go:126] duration metric: took 207.697714ms to wait for k8s-apps to be running ...
	I0813 23:48:51.128271   17389 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 23:48:51.128319   17389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0813 23:48:51.153565   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:51.154896   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:51.169250   17389 system_svc.go:56] duration metric: took 40.970036ms WaitForService to wait for kubelet
	I0813 23:48:51.169280   17389 kubeadm.go:582] duration metric: took 13.930952977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 23:48:51.169304   17389 node_conditions.go:102] verifying NodePressure condition ...
	I0813 23:48:51.190455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:51.320331   17389 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0813 23:48:51.320354   17389 node_conditions.go:123] node cpu capacity is 2
	I0813 23:48:51.320365   17389 node_conditions.go:105] duration metric: took 151.056247ms to run NodePressure ...
	I0813 23:48:51.320376   17389 start.go:241] waiting for startup goroutines ...
	I0813 23:48:51.598588   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:51.653107   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:51.653326   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:51.692851   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:52.099804   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:52.153281   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:52.155502   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:52.190023   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:52.597256   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:52.652564   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:52.652718   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:52.689665   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:53.206551   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:53.304933   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:53.305048   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:53.305217   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:53.598948   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:53.654857   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:53.656449   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:53.690461   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:54.098849   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:54.154017   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:54.155736   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:54.190375   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:54.598273   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:54.652538   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:54.653119   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:54.689208   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:55.098261   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:55.153346   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:55.154087   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:55.197475   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:55.597907   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:55.653179   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:55.653350   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:55.689501   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:56.098424   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:56.152239   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:56.153435   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:56.189617   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:56.598247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:56.652305   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:56.652802   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:56.689085   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:57.097959   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:57.155254   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:57.155346   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:57.190012   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:57.599203   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:57.653988   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:57.654065   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:57.688912   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:58.097780   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:58.152705   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:58.153359   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:58.190051   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:58.597770   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:58.652609   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:58.653210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:58.689489   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:59.097730   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:59.152398   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:59.154600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:59.189354   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:48:59.598236   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:48:59.652890   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:48:59.653745   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:48:59.689296   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:00.097482   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:00.153192   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:00.153934   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:00.189627   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:00.597977   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:00.653060   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:00.653570   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:00.689755   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:01.097940   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:01.152212   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:01.152970   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:01.189175   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:01.597601   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:01.651972   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:01.652248   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:01.697613   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:02.097919   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:02.152873   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:02.153289   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:02.189844   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:02.599988   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:02.652114   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:02.653489   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:02.691051   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:03.098643   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:03.153666   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:03.153790   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:03.191787   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:03.600120   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:03.658600   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:03.664700   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:03.690079   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:04.098263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:04.152701   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:04.155825   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:04.189286   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:04.597928   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:04.652535   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:04.652780   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:04.688939   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:05.099623   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:05.199919   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:05.200010   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:05.200230   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:05.598190   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:05.653973   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:05.657138   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:05.700398   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:06.099861   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:06.153860   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:06.155802   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:06.189190   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:06.597871   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:06.652693   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:06.653223   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:06.691423   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:07.098729   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:07.152395   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:07.152823   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:07.190229   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:07.598319   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:07.653662   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:07.654230   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:07.697679   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:08.098263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:08.154157   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:08.154357   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:08.196256   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:08.598169   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:08.653539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:08.653651   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:08.692981   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:09.098420   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:09.153840   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:09.153901   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:09.189316   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:09.597721   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:09.653112   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:09.653493   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:09.689569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:10.098378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:10.152117   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:10.153926   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:10.189318   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:10.597507   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:10.652372   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:10.653558   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:10.688493   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:11.097975   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:11.152929   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:11.152942   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:11.189508   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:11.597522   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:11.652757   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:11.652794   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:11.697235   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:12.097550   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:12.151778   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:12.153054   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:12.189953   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:12.604938   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:12.653616   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:12.654531   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:12.690372   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.097263   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:13.152309   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:13.153397   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:13.190275   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.726269   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:13.726496   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:13.727028   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:13.727063   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.098430   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.152594   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:14.153283   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:14.189661   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:14.597775   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:14.653719   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:14.653865   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:14.689102   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:15.097049   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:15.152721   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:15.152905   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:15.189378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:15.598094   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:15.653060   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:15.653187   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:15.689956   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:16.098173   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:16.152118   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:16.153288   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:16.189681   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:16.597194   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:16.652564   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:16.653449   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:16.689527   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:17.098000   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:17.152516   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:17.156033   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:17.189581   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:17.772720   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:17.773124   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:17.773397   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:17.774222   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:18.098231   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:18.152110   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:18.152685   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:18.189532   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:18.597523   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:18.653735   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:18.654773   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:18.689434   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:19.097563   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:19.153811   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:19.154102   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:19.189332   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:19.597314   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:19.651857   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:19.653523   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:19.688933   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:20.098253   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:20.152039   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:20.152515   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:20.189753   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:20.598186   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:20.652014   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:20.652892   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:20.689003   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:21.097642   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:21.152684   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:21.153025   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:21.189536   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:21.598201   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:21.653417   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:21.653444   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:21.698186   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:22.098268   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:22.153100   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:22.154569   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:22.189184   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:22.598027   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:22.653179   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:22.653440   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:22.689931   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:23.097490   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:23.152937   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:23.153244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:23.189842   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:23.598247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:23.653362   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:23.653833   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:23.689539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:24.098766   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:24.153612   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:24.154300   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:24.189793   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:24.598322   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:24.651940   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:24.652921   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:24.689539   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:25.097167   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:25.152344   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:25.153606   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:25.189243   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:25.597675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:25.653937   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:25.655255   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:25.689834   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:26.097463   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:26.152859   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:26.153771   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:26.189546   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:26.598071   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:26.652885   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:26.654494   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:26.689487   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:27.097880   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:27.153214   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:27.153863   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:27.189129   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:27.597247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:27.652083   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:27.654300   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:27.690693   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:28.098615   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:28.152275   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:28.152637   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:28.188948   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:28.598512   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:28.654283   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:28.654380   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:28.689455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:29.097724   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:29.152620   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:29.153738   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:29.189198   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:29.597687   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:29.653718   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:29.654359   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:29.689224   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:30.097865   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:30.153281   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:30.154098   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:30.189507   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:30.597484   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:30.653660   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:30.654511   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:30.689582   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:31.097711   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:31.152718   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:31.154133   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:31.189990   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:31.597519   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:31.653204   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:31.654323   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 23:49:31.697436   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:32.098215   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:32.155025   17389 kapi.go:107] duration metric: took 46.505581339s to wait for kubernetes.io/minikube-addons=registry ...
	I0813 23:49:32.155788   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:32.192983   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:32.600434   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:32.652789   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:32.688420   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:33.098237   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:33.198631   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:33.199073   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:33.599560   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:33.654097   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:33.689242   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:34.097163   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:34.153237   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:34.190006   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:34.598928   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:34.652448   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:34.689665   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:35.098428   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:35.153212   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:35.188915   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:35.598165   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:35.652174   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:35.689560   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:36.098308   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:36.153445   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:36.189675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:36.598442   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:36.653079   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:36.689580   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:37.281636   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:37.285390   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:37.285552   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:37.597177   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:37.652193   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:37.689282   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:38.097616   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:38.152691   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:38.193651   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:38.598250   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:38.652483   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:38.689808   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:39.097860   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:39.152447   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:39.189490   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:39.597650   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:39.652429   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:39.689534   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:40.097643   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:40.197969   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:40.198740   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:40.596958   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:40.652981   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:40.688874   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.098375   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:41.156100   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:41.191405   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.845801   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:41.856947   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:41.857426   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.099214   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:42.151804   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.188826   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:42.597876   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:42.653391   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:42.689868   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:43.098128   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:43.151804   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:43.188947   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:43.601946   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:43.653712   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:43.689455   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:44.097578   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:44.152547   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:44.189645   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:44.598160   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:44.652295   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:44.688690   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.098686   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:45.199166   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.199410   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:45.597279   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:45.698602   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:45.698915   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.099244   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:46.156098   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.189992   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:46.598717   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:46.652248   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:46.690114   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:47.097932   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:47.197384   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:47.198506   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:47.600598   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:47.652849   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:47.688458   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.097890   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:48.152506   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:48.190132   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.845674   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:48.846388   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:48.846655   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.098289   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:49.198332   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:49.198592   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.596768   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:49.652304   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:49.689468   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:50.106563   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:50.205831   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:50.206391   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:50.597976   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:50.652730   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:50.689555   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:51.097816   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:51.152602   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:51.189886   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:51.598378   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:51.653210   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:51.689835   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:52.100217   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:52.153278   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:52.198828   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:52.598724   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:52.653955   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:52.688945   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:53.101206   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:53.152200   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:53.191806   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:53.599210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:53.652020   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:53.689308   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:54.512519   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:54.513123   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:54.513136   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:54.597360   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:54.652101   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:54.689111   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:55.097634   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:55.197597   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:55.198434   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:55.597107   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:55.651805   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:55.689537   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.098210   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:56.153621   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:56.189318   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.983628   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:56.983878   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:56.985650   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.098695   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.156082   17389 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 23:49:57.255160   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:57.600639   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:57.698982   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:57.699919   17389 kapi.go:107] duration metric: took 1m12.051665488s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 23:49:58.097288   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:58.190700   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:58.601685   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:58.700568   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:59.097875   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:59.190313   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:49:59.597703   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:49:59.689543   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:00.097747   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:00.190247   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:00.597439   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:00.692788   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:01.098061   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:01.189499   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:01.597920   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:01.692116   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:02.099675   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:02.189632   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 23:50:02.597817   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:02.694771   17389 kapi.go:107] duration metric: took 1m14.508630277s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0813 23:50:02.695953   17389 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-937866 cluster.
	I0813 23:50:02.696927   17389 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0813 23:50:02.697846   17389 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
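
As a minimal sketch of the opt-out described in the gcp-auth messages directly above: a pod manifest carrying the `gcp-auth-skip-secret` label. Only the label key is taken from the log output; the value "true", the pod name, and the image are illustrative assumptions, not part of the test run.

    # Hypothetical pod manifest; the gcp-auth message above names only the label key.
    # Value "true", pod name, and image are assumptions for illustration.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"    # opt this pod out of GCP credential mounting
    spec:
      containers:
      - name: app
        image: busybox                  # hypothetical image
        command: ["sleep", "3600"]

Per the message above, a pod created in the addons-937866 cluster without this label would have the GCP credentials mounted into it.
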
	I0813 23:50:03.098257   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:03.599315   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:04.098179   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:04.598577   17389 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 23:50:05.099833   17389 kapi.go:107] duration metric: took 1m18.506732231s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 23:50:05.101490   17389 out.go:177] * Enabled addons: inspektor-gadget, ingress-dns, helm-tiller, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0813 23:50:05.102657   17389 addons.go:510] duration metric: took 1m27.864313653s for enable addons: enabled=[inspektor-gadget ingress-dns helm-tiller storage-provisioner nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0813 23:50:05.102688   17389 start.go:246] waiting for cluster config update ...
	I0813 23:50:05.102703   17389 start.go:255] writing updated cluster config ...
	I0813 23:50:05.102934   17389 ssh_runner.go:195] Run: rm -f paused
	I0813 23:50:05.153742   17389 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0813 23:50:05.155757   17389 out.go:177] * Done! kubectl is now configured to use "addons-937866" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.069319728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593387069292737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01081169-5e1f-4bd3-84b9-a7876bdede1c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.069792685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=457f1dfc-a0cd-4da1-8e64-e53c12b28c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.069860749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=457f1dfc-a0cd-4da1-8e64-e53c12b28c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.070293400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=457f1dfc-a0cd-4da1-8e64-e53c12b28c0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.109112308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b523b91-f7c0-4b7a-ac70-c3b4424d2d5f name=/runtime.v1.RuntimeService/Version
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.109188905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b523b91-f7c0-4b7a-ac70-c3b4424d2d5f name=/runtime.v1.RuntimeService/Version
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.110523906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cd62920-ffdb-4667-8392-acec599c91d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.111937463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593387111905061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd62920-ffdb-4667-8392-acec599c91d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.112408647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed33c3f3-1085-41ac-b6d5-d813988ecc66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.112483438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed33c3f3-1085-41ac-b6d5-d813988ecc66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 13 23:56:27 addons-937866 crio[672]: time="2024-08-13 23:56:27.113481445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f8a78c51396eedd90f1efb827e3068eb25b17e5a211cd3e0b4a03c8f733baf1,PodSandboxId:35b1ff51c158ff15821a2f082526de21c145d159fce3acaacd1640ee1dc7db11,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723593204188162225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-tgpcr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a9febae-6afb-415b-9902-a227a7298d06,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8516b3c92e0cc27ed3cdbff2eea3887caa7f28512183c7f1cb8639cbbb3f0a,PodSandboxId:eac73cadfda845564addf7539292840751eabedf41c780e867eaa4607576dcfb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723593063774166595,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58e32069-0078-4b2c-83a7-45c915783932,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080bae7736a72de2879ee0d3a4f237eb9b3a908007b9b977f2a0de7752529957,PodSandboxId:b5b91661d0a0429d4a31359d707f551150e546a4fc9438d9db9306db70d31d24,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723593008973481421,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c89c9ad-4cc6-4702-9
bca-4e1f1aaba12a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aef71853ff093cc09890a375f0e40a633b9946dfe086983308015a12d79c0ad1,PodSandboxId:5a356b85f2d87c5618dada30cae3fb6065e89fe0d6017a13ac0ff56bed6ec299,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1723592962867510636,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wpqrr,io.kubernetes.pod.name
space: local-path-storage,io.kubernetes.pod.uid: c2c1ef92-b0ad-4867-8557-bd97061d6a77,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2384f234584637de7dc22678138c01c69ac4583bcef705f2c9092b9bfcdb9c3a,PodSandboxId:a02180723e74c3fa5bf4395df48247dc026937859e7c6127b3c6117d8c5e3609,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723592945059195931,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metr
ics-server-8988944d9-mnlqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82850aaa-4f93-49e5-b89b-e86bc208fd74,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b,PodSandboxId:250db49cedef2b39bccb69b7f3d4b8ddf31736e260362a70828c8b92c8d713dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723592923504577297,Labels
:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ba3f553-c9e7-46cf-b4b9-a0e0246b026a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e,PodSandboxId:40484d729855fddf393de4d963534b514e63415ed9854819fc4afc5e58bd9b14,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723592921051831977,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mq64k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0528e757-cec5-40d0-9a8e-12819640a8db,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b,PodSandboxId:d36dd11f973fb38b1936721687cfb0ab985a9e29e7a9415a3a8dcd4a8bfe4fb0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723592918705436898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-824wz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8453a99d-976e-4371-9c3b-104af4136766,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae,PodSandboxId:59e77b940b475cfe19bf401b5a937b1bd8eb5e06c53bcf9400b277e89c9b2ae3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723592907453827348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74633517769850b725dccf9a0ffc53d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88,PodSandboxId:cea3c9b85f1fd6a2e273a1befa44d440d8d6351a3d62bd8aaa9bbf6ce12b9675,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e67414
62f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723592907495439394,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fce30fe107538c52cc2e261cb4c0133b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340,PodSandboxId:9d7dd29c62160990aa4ce81efc620847f461ba1ab21f24dda57c75ad3c83816d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455
e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723592907458211050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9571be376cc12fe482c4bfad58fba714,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424,PodSandboxId:4cc8cb6149f16822e50492f840788b5466199262b9ab4f70e4266f4feb1212a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206
f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723592907432577828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-937866,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b584370eacfec4bbab6319ba572cc8a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed33c3f3-1085-41ac-b6d5-d813988ecc66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f8a78c51396e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   35b1ff51c158f       hello-world-app-55bf9c44b4-tgpcr
	4b8516b3c92e0       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   eac73cadfda84       nginx
	080bae7736a72       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   b5b91661d0a04       busybox
	aef71853ff093       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   5a356b85f2d87       local-path-provisioner-86d989889c-wpqrr
	2384f23458463       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   a02180723e74c       metrics-server-8988944d9-mnlqq
	27c34da461ec9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   250db49cedef2       storage-provisioner
	2ad9649d499b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   40484d729855f       coredns-6f6b679f8f-mq64k
	9e9d428f9086a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   d36dd11f973fb       kube-proxy-824wz
	b454b279a20a2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   cea3c9b85f1fd       kube-scheduler-addons-937866
	293d856a8715f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   9d7dd29c62160       kube-controller-manager-addons-937866
	8d4207ddccb37       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   59e77b940b475       etcd-addons-937866
	5a3ce8195d181       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   4cc8cb6149f16       kube-apiserver-addons-937866
	
	
	==> coredns [2ad9649d499b73f453514ebcdbb386fd901bcad1a452c8ce0865a7bbe03aa40e] <==
	[INFO] 10.244.0.6:43651 - 28708 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017181s
	[INFO] 10.244.0.6:44940 - 168 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070971s
	[INFO] 10.244.0.6:44940 - 51370 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000138763s
	[INFO] 10.244.0.6:46219 - 19579 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060234s
	[INFO] 10.244.0.6:46219 - 3941 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080171s
	[INFO] 10.244.0.6:50817 - 6558 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076919s
	[INFO] 10.244.0.6:50817 - 60831 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072477s
	[INFO] 10.244.0.6:45604 - 61411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00016819s
	[INFO] 10.244.0.6:45604 - 27361 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000080329s
	[INFO] 10.244.0.6:47939 - 17859 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100466s
	[INFO] 10.244.0.6:47939 - 42188 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067174s
	[INFO] 10.244.0.6:33737 - 30997 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093616s
	[INFO] 10.244.0.6:33737 - 4630 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053071s
	[INFO] 10.244.0.6:44723 - 32836 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106843s
	[INFO] 10.244.0.6:44723 - 55622 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066281s
	[INFO] 10.244.0.22:55010 - 57326 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000458656s
	[INFO] 10.244.0.22:35695 - 36191 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000551967s
	[INFO] 10.244.0.22:52582 - 12013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120693s
	[INFO] 10.244.0.22:50456 - 22256 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124883s
	[INFO] 10.244.0.22:37911 - 46857 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087551s
	[INFO] 10.244.0.22:48471 - 41145 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009798s
	[INFO] 10.244.0.22:59036 - 40120 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.0007384s
	[INFO] 10.244.0.22:47390 - 5688 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002527361s
	[INFO] 10.244.0.26:52827 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000459791s
	[INFO] 10.244.0.26:37226 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000230064s
	
	
	==> describe nodes <==
	Name:               addons-937866
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-937866
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=addons-937866
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_13T23_48_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-937866
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Aug 2024 23:48:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-937866
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Aug 2024 23:56:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Aug 2024 23:53:39 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Aug 2024 23:53:39 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Aug 2024 23:53:39 +0000   Tue, 13 Aug 2024 23:48:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Aug 2024 23:53:39 +0000   Tue, 13 Aug 2024 23:48:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    addons-937866
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc5bcf134b3b4c709916fbf63733e2a0
	  System UUID:                bc5bcf13-4b3b-4c70-9916-fbf63733e2a0
	  Boot ID:                    eaf6e0ab-a5e4-44e2-800d-ca41f7b49a0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     hello-world-app-55bf9c44b4-tgpcr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 coredns-6f6b679f8f-mq64k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m50s
	  kube-system                 etcd-addons-937866                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m55s
	  kube-system                 kube-apiserver-addons-937866               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-controller-manager-addons-937866      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-824wz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 kube-scheduler-addons-937866               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 metrics-server-8988944d9-mnlqq             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m45s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  local-path-storage          local-path-provisioner-86d989889c-wpqrr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m1s (x8 over 8m1s)  kubelet          Node addons-937866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m1s (x8 over 8m1s)  kubelet          Node addons-937866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m1s (x7 over 8m1s)  kubelet          Node addons-937866 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m55s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m55s                kubelet          Node addons-937866 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s                kubelet          Node addons-937866 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m55s                kubelet          Node addons-937866 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m54s                kubelet          Node addons-937866 status is now: NodeReady
	  Normal  RegisteredNode           7m51s                node-controller  Node addons-937866 event: Registered Node addons-937866 in Controller
	
	
	==> dmesg <==
	[  +5.061112] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.028728] kauditd_printk_skb: 156 callbacks suppressed
	[  +6.733719] kauditd_printk_skb: 36 callbacks suppressed
	[Aug13 23:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.166878] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.256576] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.794045] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.471540] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.867276] kauditd_printk_skb: 22 callbacks suppressed
	[Aug13 23:50] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.474459] kauditd_printk_skb: 52 callbacks suppressed
	[ +10.894825] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.885782] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.074865] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.085248] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.014442] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.846385] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.899849] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.273445] kauditd_printk_skb: 27 callbacks suppressed
	[Aug13 23:51] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.155711] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.401696] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.114049] kauditd_printk_skb: 33 callbacks suppressed
	[Aug13 23:53] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.456489] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [8d4207ddccb37f18323a1e3696612f510dd1ed74f5cf3e34d4fa6005dae1c9ae] <==
	{"level":"warn","ts":"2024-08-13T23:49:56.961126Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.584809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-13T23:49:56.961144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.424532ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-13T23:49:56.961164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"325.252916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:49:56.962532Z","caller":"traceutil/trace.go:171","msg":"trace[823464642] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"381.823044ms","start":"2024-08-13T23:49:56.580698Z","end":"2024-08-13T23:49:56.962521Z","steps":["trace[823464642] 'agreement among raft nodes before linearized reading'  (duration: 379.370567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:49:56.963432Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.580663Z","time spent":"382.761174ms","remote":"127.0.0.1:33668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-13T23:49:56.963185Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.593125Z","time spent":"370.044007ms","remote":"127.0.0.1:33518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":4,"response size":30,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "}
	{"level":"info","ts":"2024-08-13T23:49:56.963291Z","caller":"traceutil/trace.go:171","msg":"trace[397263197] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"289.195793ms","start":"2024-08-13T23:49:56.674088Z","end":"2024-08-13T23:49:56.963284Z","steps":["trace[397263197] 'agreement among raft nodes before linearized reading'  (duration: 287.005682ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963334Z","caller":"traceutil/trace.go:171","msg":"trace[1053156069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1146; }","duration":"294.789693ms","start":"2024-08-13T23:49:56.668538Z","end":"2024-08-13T23:49:56.963328Z","steps":["trace[1053156069] 'agreement among raft nodes before linearized reading'  (duration: 292.578679ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963372Z","caller":"traceutil/trace.go:171","msg":"trace[516487354] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1146; }","duration":"324.65087ms","start":"2024-08-13T23:49:56.638715Z","end":"2024-08-13T23:49:56.963366Z","steps":["trace[516487354] 'agreement among raft nodes before linearized reading'  (duration: 322.419972ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:49:56.963407Z","caller":"traceutil/trace.go:171","msg":"trace[1670613478] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1146; }","duration":"327.496015ms","start":"2024-08-13T23:49:56.635907Z","end":"2024-08-13T23:49:56.963403Z","steps":["trace[1670613478] 'agreement among raft nodes before linearized reading'  (duration: 325.246052ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:49:56.965206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:49:56.635871Z","time spent":"329.324615ms","remote":"127.0.0.1:33668","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-13T23:50:45.858362Z","caller":"traceutil/trace.go:171","msg":"trace[1305374889] linearizableReadLoop","detail":"{readStateIndex:1561; appliedIndex:1560; }","duration":"184.914883ms","start":"2024-08-13T23:50:45.673423Z","end":"2024-08-13T23:50:45.858337Z","steps":["trace[1305374889] 'read index received'  (duration: 184.770584ms)","trace[1305374889] 'applied index is now lower than readState.Index'  (duration: 143.863µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-13T23:50:45.858577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.140781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:50:45.858643Z","caller":"traceutil/trace.go:171","msg":"trace[1850772669] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1514; }","duration":"185.23461ms","start":"2024-08-13T23:50:45.673401Z","end":"2024-08-13T23:50:45.858636Z","steps":["trace[1850772669] 'agreement among raft nodes before linearized reading'  (duration: 185.075771ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:50:45.858843Z","caller":"traceutil/trace.go:171","msg":"trace[1638936505] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"216.098948ms","start":"2024-08-13T23:50:45.642697Z","end":"2024-08-13T23:50:45.858796Z","steps":["trace[1638936505] 'process raft request'  (duration: 215.540086ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:50:53.584216Z","caller":"traceutil/trace.go:171","msg":"trace[1489961074] linearizableReadLoop","detail":"{readStateIndex:1589; appliedIndex:1588; }","duration":"205.886912ms","start":"2024-08-13T23:50:53.378314Z","end":"2024-08-13T23:50:53.584201Z","steps":["trace[1489961074] 'read index received'  (duration: 205.471506ms)","trace[1489961074] 'applied index is now lower than readState.Index'  (duration: 414.395µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-13T23:50:53.584538Z","caller":"traceutil/trace.go:171","msg":"trace[1483766349] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"284.731748ms","start":"2024-08-13T23:50:53.299785Z","end":"2024-08-13T23:50:53.584517Z","steps":["trace[1483766349] 'process raft request'  (duration: 284.097909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:50:53.584486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.140799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-08-13T23:50:53.585802Z","caller":"traceutil/trace.go:171","msg":"trace[1039593450] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1540; }","duration":"207.480901ms","start":"2024-08-13T23:50:53.378311Z","end":"2024-08-13T23:50:53.585792Z","steps":["trace[1039593450] 'agreement among raft nodes before linearized reading'  (duration: 206.001828ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:51:15.615891Z","caller":"traceutil/trace.go:171","msg":"trace[2043624885] linearizableReadLoop","detail":"{readStateIndex:1740; appliedIndex:1739; }","duration":"133.627863ms","start":"2024-08-13T23:51:15.482249Z","end":"2024-08-13T23:51:15.615877Z","steps":["trace[2043624885] 'read index received'  (duration: 133.477937ms)","trace[2043624885] 'applied index is now lower than readState.Index'  (duration: 149.543µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-13T23:51:15.616013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.726342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-13T23:51:15.616037Z","caller":"traceutil/trace.go:171","msg":"trace[664777303] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1683; }","duration":"133.784057ms","start":"2024-08-13T23:51:15.482246Z","end":"2024-08-13T23:51:15.616030Z","steps":["trace[664777303] 'agreement among raft nodes before linearized reading'  (duration: 133.692896ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-13T23:51:15.616239Z","caller":"traceutil/trace.go:171","msg":"trace[1967623336] transaction","detail":"{read_only:false; response_revision:1683; number_of_response:1; }","duration":"304.299722ms","start":"2024-08-13T23:51:15.311890Z","end":"2024-08-13T23:51:15.616190Z","steps":["trace[1967623336] 'process raft request'  (duration: 303.899084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-13T23:51:15.616339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-13T23:51:15.311868Z","time spent":"304.411735ms","remote":"127.0.0.1:33748","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1671 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-08-13T23:51:51.518256Z","caller":"traceutil/trace.go:171","msg":"trace[2092690914] transaction","detail":"{read_only:false; response_revision:1888; number_of_response:1; }","duration":"158.236909ms","start":"2024-08-13T23:51:51.360004Z","end":"2024-08-13T23:51:51.518241Z","steps":["trace[2092690914] 'process raft request'  (duration: 158.104001ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:56:27 up 8 min,  0 users,  load average: 0.79, 0.81, 0.54
	Linux addons-937866 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a3ce8195d181b25c78f8341fd5c5da13fb0f3bf61d58093031de7c03a823424] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0813 23:50:13.354883       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0813 23:50:13.367174       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0813 23:50:16.644727       1 conn.go:339] Error on socket receive: read tcp 192.168.39.8:8443->192.168.39.1:38974: use of closed network connection
	E0813 23:50:16.844139       1 conn.go:339] Error on socket receive: read tcp 192.168.39.8:8443->192.168.39.1:38990: use of closed network connection
	I0813 23:50:41.798659       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.168.48"}
	I0813 23:50:59.839667       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0813 23:50:59.997213       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.9.84"}
	I0813 23:51:01.625081       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0813 23:51:02.673121       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0813 23:51:24.549983       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0813 23:51:40.742065       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0813 23:51:47.259048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.259122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.296997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.297093       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.311362       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.311412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0813 23:51:47.337527       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0813 23:51:47.337688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0813 23:51:48.297523       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0813 23:51:48.337775       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0813 23:51:48.348326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0813 23:53:21.511584       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.13.200"}
	
	
	==> kube-controller-manager [293d856a8715f2caab799754964d885457b07494e38abee7052087e67cc85340] <==
	W0813 23:54:14.796710       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:54:14.796791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:54:17.318052       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:54:17.318158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:54:27.996467       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:54:27.996562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:54:36.809761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:54:36.809832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:12.998671       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:12.998736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:15.689755       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:15.689818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:19.178788       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:19.178904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:21.809206       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:21.809334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:49.868329       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:49.868390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:52.457082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:52.457192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:55:58.122058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:55:58.122245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0813 23:56:06.028424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 23:56:06.028486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0813 23:56:26.231018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="12.482µs"
	
	
	==> kube-proxy [9e9d428f9086acfa295ebbe628e9f163619d77a3c764d06b84e4f2ade96f737b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0813 23:48:39.544557       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0813 23:48:39.566423       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.8"]
	E0813 23:48:39.566512       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0813 23:48:39.663392       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0813 23:48:39.663434       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0813 23:48:39.663465       1 server_linux.go:169] "Using iptables Proxier"
	I0813 23:48:39.665717       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0813 23:48:39.665964       1 server.go:483] "Version info" version="v1.31.0"
	I0813 23:48:39.665984       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0813 23:48:39.670933       1 config.go:197] "Starting service config controller"
	I0813 23:48:39.670969       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0813 23:48:39.670999       1 config.go:104] "Starting endpoint slice config controller"
	I0813 23:48:39.671004       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0813 23:48:39.679225       1 config.go:326] "Starting node config controller"
	I0813 23:48:39.679253       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0813 23:48:39.771960       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0813 23:48:39.772043       1 shared_informer.go:320] Caches are synced for service config
	I0813 23:48:39.779530       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b454b279a20a235dbba873bdf29da83c94c918cdf25120909f89e43ab04e4f88] <==
	W0813 23:48:30.034785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 23:48:30.034811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:30.038916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 23:48:30.038977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:30.896847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 23:48:30.896916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.039478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 23:48:31.039512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.084928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 23:48:31.085106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.090241       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 23:48:31.090282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.225454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 23:48:31.225511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.229342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 23:48:31.229492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.242459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0813 23:48:31.242637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.256334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 23:48:31.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.258547       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 23:48:31.258626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0813 23:48:31.308579       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 23:48:31.308758       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0813 23:48:33.308676       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 13 23:55:23 addons-937866 kubelet[1207]: E0813 23:55:23.070981    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593323070557454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:23 addons-937866 kubelet[1207]: E0813 23:55:23.071255    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593323070557454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:32 addons-937866 kubelet[1207]: E0813 23:55:32.771312    1207 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 13 23:55:32 addons-937866 kubelet[1207]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 13 23:55:32 addons-937866 kubelet[1207]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 13 23:55:32 addons-937866 kubelet[1207]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 13 23:55:32 addons-937866 kubelet[1207]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 13 23:55:33 addons-937866 kubelet[1207]: E0813 23:55:33.074382    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593333074050650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:33 addons-937866 kubelet[1207]: E0813 23:55:33.074421    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593333074050650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:43 addons-937866 kubelet[1207]: E0813 23:55:43.077442    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593343076954260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:43 addons-937866 kubelet[1207]: E0813 23:55:43.077942    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593343076954260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:53 addons-937866 kubelet[1207]: E0813 23:55:53.080890    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593353080541715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:55:53 addons-937866 kubelet[1207]: E0813 23:55:53.080927    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593353080541715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:03 addons-937866 kubelet[1207]: E0813 23:56:03.083283    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593363082952447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:03 addons-937866 kubelet[1207]: E0813 23:56:03.083326    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593363082952447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:13 addons-937866 kubelet[1207]: E0813 23:56:13.085817    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593373085253467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:13 addons-937866 kubelet[1207]: E0813 23:56:13.085924    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593373085253467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:23 addons-937866 kubelet[1207]: E0813 23:56:23.089663    1207 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593383088993974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:23 addons-937866 kubelet[1207]: E0813 23:56:23.089706    1207 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723593383088993974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 13 23:56:26 addons-937866 kubelet[1207]: I0813 23:56:26.259028    1207 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-tgpcr" podStartSLOduration=182.936553528 podStartE2EDuration="3m5.259010075s" podCreationTimestamp="2024-08-13 23:53:21 +0000 UTC" firstStartedPulling="2024-08-13 23:53:21.853685038 +0000 UTC m=+289.215478923" lastFinishedPulling="2024-08-13 23:53:24.176141584 +0000 UTC m=+291.537935470" observedRunningTime="2024-08-13 23:53:24.345398722 +0000 UTC m=+291.707192619" watchObservedRunningTime="2024-08-13 23:56:26.259010075 +0000 UTC m=+473.620803974"
	Aug 13 23:56:26 addons-937866 kubelet[1207]: I0813 23:56:26.756381    1207 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 13 23:56:27 addons-937866 kubelet[1207]: I0813 23:56:27.694415    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthcr\" (UniqueName: \"kubernetes.io/projected/82850aaa-4f93-49e5-b89b-e86bc208fd74-kube-api-access-bthcr\") pod \"82850aaa-4f93-49e5-b89b-e86bc208fd74\" (UID: \"82850aaa-4f93-49e5-b89b-e86bc208fd74\") "
	Aug 13 23:56:27 addons-937866 kubelet[1207]: I0813 23:56:27.694467    1207 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/82850aaa-4f93-49e5-b89b-e86bc208fd74-tmp-dir\") pod \"82850aaa-4f93-49e5-b89b-e86bc208fd74\" (UID: \"82850aaa-4f93-49e5-b89b-e86bc208fd74\") "
	Aug 13 23:56:27 addons-937866 kubelet[1207]: I0813 23:56:27.694851    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82850aaa-4f93-49e5-b89b-e86bc208fd74-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "82850aaa-4f93-49e5-b89b-e86bc208fd74" (UID: "82850aaa-4f93-49e5-b89b-e86bc208fd74"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 13 23:56:27 addons-937866 kubelet[1207]: I0813 23:56:27.708165    1207 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82850aaa-4f93-49e5-b89b-e86bc208fd74-kube-api-access-bthcr" (OuterVolumeSpecName: "kube-api-access-bthcr") pod "82850aaa-4f93-49e5-b89b-e86bc208fd74" (UID: "82850aaa-4f93-49e5-b89b-e86bc208fd74"). InnerVolumeSpecName "kube-api-access-bthcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [27c34da461ec94bd2a036cc3e5a0bcb06066c6e533a5d76d7bd8689b9ace0e1b] <==
	I0813 23:48:44.466997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 23:48:44.595691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 23:48:44.596286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 23:48:44.652286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 23:48:44.652811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89976f48-0746-4e54-ba26-c348dc5cce52", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef became leader
	I0813 23:48:44.652948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef!
	I0813 23:48:44.856336       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-937866_9dfff4cf-f63c-4b3b-a619-d7d854f560ef!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-937866 -n addons-937866
helpers_test.go:261: (dbg) Run:  kubectl --context addons-937866 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (346.66s)
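For reference, the metrics-server state captured in the post-mortem above can be re-checked by hand against the same profile. This is a minimal sketch only; the label selector and deployment name are assumptions inferred from the pod name metrics-server-8988944d9-mnlqq in the node description, not commands taken from this run:

    # Inspect the metrics-server pod and deployment in kube-system (label and name assumed).
    kubectl --context addons-937866 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context addons-937866 -n kube-system describe deployment metrics-server
    # If the v1beta1.metrics.k8s.io APIService is healthy, resource metrics become queryable:
    kubectl --context addons-937866 top nodes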

                                                
                                    
TestAddons/StoppedEnableDisable (154.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-937866
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-937866: exit status 82 (2m0.446105298s)

                                                
                                                
-- stdout --
	* Stopping node "addons-937866"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-937866" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-937866
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-937866: exit status 11 (21.611296694s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-937866" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-937866
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-937866: exit status 11 (6.143001s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-937866" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-937866
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-937866: exit status 11 (6.144774427s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.8:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-937866" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (354.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-105013 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-105013 -v=7 --alsologtostderr
E0814 00:09:58.046181   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:10:05.519331   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-105013 -v=7 --alsologtostderr: exit status 82 (2m1.806493086s)

                                                
                                                
-- stdout --
	* Stopping node "ha-105013-m04"  ...
	* Stopping node "ha-105013-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:08:59.799459   31882 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:08:59.799700   31882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:08:59.799709   31882 out.go:304] Setting ErrFile to fd 2...
	I0814 00:08:59.799713   31882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:08:59.799872   31882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:08:59.800075   31882 out.go:298] Setting JSON to false
	I0814 00:08:59.800159   31882 mustload.go:65] Loading cluster: ha-105013
	I0814 00:08:59.800496   31882 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:08:59.800574   31882 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/config.json ...
	I0814 00:08:59.800743   31882 mustload.go:65] Loading cluster: ha-105013
	I0814 00:08:59.800867   31882 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:08:59.800896   31882 stop.go:39] StopHost: ha-105013-m04
	I0814 00:08:59.801242   31882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:59.801303   31882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:59.818885   31882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0814 00:08:59.819328   31882 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:59.819862   31882 main.go:141] libmachine: Using API Version  1
	I0814 00:08:59.819889   31882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:59.820276   31882 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:59.822285   31882 out.go:177] * Stopping node "ha-105013-m04"  ...
	I0814 00:08:59.823453   31882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:08:59.823487   31882 main.go:141] libmachine: (ha-105013-m04) Calling .DriverName
	I0814 00:08:59.823728   31882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:08:59.823748   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHHostname
	I0814 00:08:59.826454   31882 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:59.826973   31882 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:07:15 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:08:59.827000   31882 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:59.827189   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHPort
	I0814 00:08:59.827369   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHKeyPath
	I0814 00:08:59.827526   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHUsername
	I0814 00:08:59.827685   31882 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m04/id_rsa Username:docker}
	I0814 00:08:59.912787   31882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:08:59.965790   31882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:09:00.018564   31882 main.go:141] libmachine: Stopping "ha-105013-m04"...
	I0814 00:09:00.018595   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetState
	I0814 00:09:00.020197   31882 main.go:141] libmachine: (ha-105013-m04) Calling .Stop
	I0814 00:09:00.023455   31882 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 0/120
	I0814 00:09:01.139246   31882 main.go:141] libmachine: (ha-105013-m04) Calling .GetState
	I0814 00:09:01.140473   31882 main.go:141] libmachine: Machine "ha-105013-m04" was stopped.
	I0814 00:09:01.140489   31882 stop.go:75] duration metric: took 1.317045095s to stop
	I0814 00:09:01.140519   31882 stop.go:39] StopHost: ha-105013-m03
	I0814 00:09:01.140943   31882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:09:01.140984   31882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:09:01.156726   31882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35299
	I0814 00:09:01.157136   31882 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:09:01.157615   31882 main.go:141] libmachine: Using API Version  1
	I0814 00:09:01.157635   31882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:09:01.157904   31882 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:09:01.159552   31882 out.go:177] * Stopping node "ha-105013-m03"  ...
	I0814 00:09:01.161091   31882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:09:01.161112   31882 main.go:141] libmachine: (ha-105013-m03) Calling .DriverName
	I0814 00:09:01.161364   31882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:09:01.161390   31882 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHHostname
	I0814 00:09:01.164187   31882 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:09:01.164581   31882 main.go:141] libmachine: (ha-105013-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:67:1f", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:05:54 +0000 UTC Type:0 Mac:52:54:00:b1:67:1f Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:ha-105013-m03 Clientid:01:52:54:00:b1:67:1f}
	I0814 00:09:01.164610   31882 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined IP address 192.168.39.177 and MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:09:01.164764   31882 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHPort
	I0814 00:09:01.164926   31882 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHKeyPath
	I0814 00:09:01.165065   31882 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHUsername
	I0814 00:09:01.165172   31882 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m03/id_rsa Username:docker}
	I0814 00:09:01.252480   31882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:09:01.305505   31882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:09:01.359900   31882 main.go:141] libmachine: Stopping "ha-105013-m03"...
	I0814 00:09:01.359948   31882 main.go:141] libmachine: (ha-105013-m03) Calling .GetState
	I0814 00:09:01.361571   31882 main.go:141] libmachine: (ha-105013-m03) Calling .Stop
	I0814 00:09:01.364940   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 0/120
	I0814 00:09:02.366203   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 1/120
	I0814 00:09:03.368511   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 2/120
	I0814 00:09:04.369739   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 3/120
	I0814 00:09:05.371062   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 4/120
	I0814 00:09:06.372498   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 5/120
	I0814 00:09:07.373761   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 6/120
	I0814 00:09:08.375130   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 7/120
	I0814 00:09:09.376454   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 8/120
	I0814 00:09:10.377936   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 9/120
	I0814 00:09:11.379781   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 10/120
	I0814 00:09:12.381149   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 11/120
	I0814 00:09:13.382616   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 12/120
	I0814 00:09:14.384036   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 13/120
	I0814 00:09:15.385380   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 14/120
	I0814 00:09:16.387504   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 15/120
	I0814 00:09:17.388777   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 16/120
	I0814 00:09:18.390260   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 17/120
	I0814 00:09:19.391753   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 18/120
	I0814 00:09:20.393270   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 19/120
	I0814 00:09:21.395333   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 20/120
	I0814 00:09:22.396615   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 21/120
	I0814 00:09:23.397905   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 22/120
	I0814 00:09:24.399286   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 23/120
	I0814 00:09:25.400695   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 24/120
	I0814 00:09:26.402395   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 25/120
	I0814 00:09:27.403860   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 26/120
	I0814 00:09:28.405602   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 27/120
	I0814 00:09:29.407115   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 28/120
	I0814 00:09:30.408915   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 29/120
	I0814 00:09:31.410697   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 30/120
	I0814 00:09:32.412659   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 31/120
	I0814 00:09:33.413889   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 32/120
	I0814 00:09:34.415255   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 33/120
	I0814 00:09:35.416394   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 34/120
	I0814 00:09:36.418001   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 35/120
	I0814 00:09:37.419162   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 36/120
	I0814 00:09:38.420513   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 37/120
	I0814 00:09:39.422009   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 38/120
	I0814 00:09:40.423506   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 39/120
	I0814 00:09:41.425214   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 40/120
	I0814 00:09:42.426470   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 41/120
	I0814 00:09:43.428406   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 42/120
	I0814 00:09:44.430790   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 43/120
	I0814 00:09:45.432636   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 44/120
	I0814 00:09:46.434809   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 45/120
	I0814 00:09:47.436795   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 46/120
	I0814 00:09:48.438085   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 47/120
	I0814 00:09:49.439314   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 48/120
	I0814 00:09:50.440650   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 49/120
	I0814 00:09:51.442171   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 50/120
	I0814 00:09:52.444418   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 51/120
	I0814 00:09:53.445550   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 52/120
	I0814 00:09:54.446995   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 53/120
	I0814 00:09:55.448516   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 54/120
	I0814 00:09:56.450861   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 55/120
	I0814 00:09:57.452258   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 56/120
	I0814 00:09:58.454179   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 57/120
	I0814 00:09:59.456386   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 58/120
	I0814 00:10:00.457962   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 59/120
	I0814 00:10:01.459918   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 60/120
	I0814 00:10:02.461238   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 61/120
	I0814 00:10:03.462521   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 62/120
	I0814 00:10:04.464437   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 63/120
	I0814 00:10:05.465873   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 64/120
	I0814 00:10:06.467588   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 65/120
	I0814 00:10:07.468785   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 66/120
	I0814 00:10:08.470270   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 67/120
	I0814 00:10:09.472641   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 68/120
	I0814 00:10:10.474321   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 69/120
	I0814 00:10:11.475863   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 70/120
	I0814 00:10:12.477196   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 71/120
	I0814 00:10:13.479076   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 72/120
	I0814 00:10:14.480434   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 73/120
	I0814 00:10:15.481477   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 74/120
	I0814 00:10:16.482823   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 75/120
	I0814 00:10:17.484459   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 76/120
	I0814 00:10:18.486098   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 77/120
	I0814 00:10:19.487567   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 78/120
	I0814 00:10:20.489169   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 79/120
	I0814 00:10:21.490921   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 80/120
	I0814 00:10:22.492442   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 81/120
	I0814 00:10:23.493632   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 82/120
	I0814 00:10:24.494851   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 83/120
	I0814 00:10:25.496241   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 84/120
	I0814 00:10:26.497773   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 85/120
	I0814 00:10:27.499074   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 86/120
	I0814 00:10:28.500426   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 87/120
	I0814 00:10:29.501789   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 88/120
	I0814 00:10:30.503064   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 89/120
	I0814 00:10:31.504497   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 90/120
	I0814 00:10:32.505702   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 91/120
	I0814 00:10:33.507558   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 92/120
	I0814 00:10:34.508949   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 93/120
	I0814 00:10:35.511452   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 94/120
	I0814 00:10:36.513359   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 95/120
	I0814 00:10:37.514873   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 96/120
	I0814 00:10:38.516456   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 97/120
	I0814 00:10:39.518185   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 98/120
	I0814 00:10:40.520399   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 99/120
	I0814 00:10:41.521965   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 100/120
	I0814 00:10:42.523288   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 101/120
	I0814 00:10:43.524551   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 102/120
	I0814 00:10:44.525865   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 103/120
	I0814 00:10:45.527261   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 104/120
	I0814 00:10:46.528828   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 105/120
	I0814 00:10:47.531161   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 106/120
	I0814 00:10:48.532462   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 107/120
	I0814 00:10:49.533988   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 108/120
	I0814 00:10:50.536441   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 109/120
	I0814 00:10:51.538213   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 110/120
	I0814 00:10:52.540640   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 111/120
	I0814 00:10:53.542929   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 112/120
	I0814 00:10:54.544833   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 113/120
	I0814 00:10:55.546275   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 114/120
	I0814 00:10:56.547870   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 115/120
	I0814 00:10:57.549288   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 116/120
	I0814 00:10:58.550771   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 117/120
	I0814 00:10:59.552612   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 118/120
	I0814 00:11:00.553996   31882 main.go:141] libmachine: (ha-105013-m03) Waiting for machine to stop 119/120
	I0814 00:11:01.554473   31882 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 00:11:01.554544   31882 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 00:11:01.556345   31882 out.go:177] 
	W0814 00:11:01.557464   31882 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 00:11:01.557479   31882 out.go:239] * 
	* 
	W0814 00:11:01.559766   31882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 00:11:01.561868   31882 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-105013 -v=7 --alsologtostderr" : exit status 82
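The stderr above shows the shape of the failure: ha-105013-m04 stops within about a second, but ha-105013-m03 is polled once per second for the full 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and never leaves the "Running" state, so the stop exits 82 and the test falls through to a full restart. The bounded wait visible in the log follows a pattern like the sketch below; stillRunning is a hypothetical stand-in for the libmachine GetState call, and this is an illustration of the polling loop, not minikube's stop implementation.

    package main

    import (
    	"fmt"
    	"time"
    )

    // stillRunning stands in for the GetState call seen in the log; returning
    // true forever reproduces exactly the failure mode above.
    func stillRunning() bool { return true }

    func waitForStop() error {
    	// 120 one-second polls matches the two-minute window in the log.
    	for i := 0; i < 120; i++ {
    		if !stillRunning() {
    			return nil
    		}
    		fmt.Printf("Waiting for machine to stop %d/120\n", i)
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("unable to stop vm, current state %q", "Running")
    }

    func main() {
    	if err := waitForStop(); err != nil {
    		fmt.Println("stop err:", err)
    	}
    }

After the timeout, the `minikube start --wait=true` run below completes in about 3m50s, so the test fails on the stop step rather than on the restart itself.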
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105013 --wait=true -v=7 --alsologtostderr
E0814 00:12:14.189277   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:12:41.887856   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-105013 --wait=true -v=7 --alsologtostderr: (3m50.377544645s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-105013
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105013 -n ha-105013
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105013 logs -n 25: (1.661144825s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m02:/home/docker/cp-test_ha-105013-m03_ha-105013-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m02 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m03_ha-105013-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04:/home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m04 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp testdata/cp-test.txt                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013:/home/docker/cp-test_ha-105013-m04_ha-105013.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013 sudo cat                                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m02:/home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m02 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m03:/home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m03 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105013 node stop m02 -v=7                                                     | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105013 node start m02 -v=7                                                    | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105013 -v=7                                                           | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-105013 -v=7                                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-105013 --wait=true -v=7                                                    | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:11 UTC | 14 Aug 24 00:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105013                                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:14 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:11:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:11:01.603645   32343 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:11:01.603869   32343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:11:01.603877   32343 out.go:304] Setting ErrFile to fd 2...
	I0814 00:11:01.603881   32343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:11:01.604023   32343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:11:01.604532   32343 out.go:298] Setting JSON to false
	I0814 00:11:01.605605   32343 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3208,"bootTime":1723591054,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:11:01.605747   32343 start.go:139] virtualization: kvm guest
	I0814 00:11:01.608323   32343 out.go:177] * [ha-105013] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:11:01.609579   32343 notify.go:220] Checking for updates...
	I0814 00:11:01.609599   32343 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:11:01.611171   32343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:11:01.612889   32343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:11:01.614091   32343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:11:01.615186   32343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:11:01.616542   32343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:11:01.617947   32343 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:11:01.618071   32343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:11:01.618465   32343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:11:01.618516   32343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:11:01.633575   32343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0814 00:11:01.634070   32343 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:11:01.634588   32343 main.go:141] libmachine: Using API Version  1
	I0814 00:11:01.634613   32343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:11:01.634960   32343 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:11:01.635138   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.670109   32343 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:11:01.671351   32343 start.go:297] selected driver: kvm2
	I0814 00:11:01.671371   32343 start.go:901] validating driver "kvm2" against &{Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:11:01.671542   32343 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:11:01.671847   32343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:11:01.671918   32343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:11:01.686470   32343 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:11:01.687452   32343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:11:01.687551   32343 cni.go:84] Creating CNI manager for ""
	I0814 00:11:01.687569   32343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 00:11:01.687660   32343 start.go:340] cluster config:
	{Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:11:01.687842   32343 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:11:01.689692   32343 out.go:177] * Starting "ha-105013" primary control-plane node in "ha-105013" cluster
	I0814 00:11:01.690919   32343 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:11:01.690968   32343 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:11:01.690980   32343 cache.go:56] Caching tarball of preloaded images
	I0814 00:11:01.691058   32343 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:11:01.691070   32343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 00:11:01.691183   32343 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/config.json ...
	I0814 00:11:01.691404   32343 start.go:360] acquireMachinesLock for ha-105013: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:11:01.691456   32343 start.go:364] duration metric: took 33.019µs to acquireMachinesLock for "ha-105013"
	I0814 00:11:01.691475   32343 start.go:96] Skipping create...Using existing machine configuration
	I0814 00:11:01.691492   32343 fix.go:54] fixHost starting: 
	I0814 00:11:01.691774   32343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:11:01.691813   32343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:11:01.705778   32343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
	I0814 00:11:01.706192   32343 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:11:01.706654   32343 main.go:141] libmachine: Using API Version  1
	I0814 00:11:01.706676   32343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:11:01.706964   32343 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:11:01.707134   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.707289   32343 main.go:141] libmachine: (ha-105013) Calling .GetState
	I0814 00:11:01.708800   32343 fix.go:112] recreateIfNeeded on ha-105013: state=Running err=<nil>
	W0814 00:11:01.708820   32343 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 00:11:01.710621   32343 out.go:177] * Updating the running kvm2 "ha-105013" VM ...
	I0814 00:11:01.711941   32343 machine.go:94] provisionDockerMachine start ...
	I0814 00:11:01.711966   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.712173   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.714197   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.714615   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.714643   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.714736   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.714911   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.715067   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.715209   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.715365   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.715548   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.715561   32343 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:11:01.827484   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105013
	
	I0814 00:11:01.827513   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:01.827823   32343 buildroot.go:166] provisioning hostname "ha-105013"
	I0814 00:11:01.827850   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:01.828071   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.830717   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.831169   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.831192   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.831365   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.831534   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.831718   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.831879   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.832041   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.832240   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.832254   32343 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105013 && echo "ha-105013" | sudo tee /etc/hostname
	I0814 00:11:01.953778   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105013
	
	I0814 00:11:01.953811   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.956433   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.956877   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.956905   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.957039   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.957222   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.957363   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.957503   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.957663   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.957893   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.957916   32343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105013' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105013/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105013' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:11:02.063262   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:11:02.063302   32343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:11:02.063329   32343 buildroot.go:174] setting up certificates
	I0814 00:11:02.063339   32343 provision.go:84] configureAuth start
	I0814 00:11:02.063348   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:02.063662   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:11:02.066704   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.067260   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.067288   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.067426   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.069799   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.070165   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.070193   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.070313   32343 provision.go:143] copyHostCerts
	I0814 00:11:02.070344   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:11:02.070387   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:11:02.070403   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:11:02.070471   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:11:02.070578   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:11:02.070621   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:11:02.070634   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:11:02.070676   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:11:02.070749   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:11:02.070773   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:11:02.070782   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:11:02.070818   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:11:02.070890   32343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.ha-105013 san=[127.0.0.1 192.168.39.79 ha-105013 localhost minikube]
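The server cert generated above carries a fixed SAN list (loopback, the node IP 192.168.39.79, the hostname, localhost, minikube). A minimal sketch, assuming the same .minikube layout on the Jenkins host and a stock openssl, of confirming which SANs actually landed in the signed cert:

    # Print the Subject Alternative Name extension of the machine server cert
    # (path taken from the log line above; openssl is the only dependency).
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"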
	I0814 00:11:02.206902   32343 provision.go:177] copyRemoteCerts
	I0814 00:11:02.206961   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:11:02.206982   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.209768   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.210200   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.210230   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.210419   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:02.210595   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.210745   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:02.210879   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:11:02.293936   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 00:11:02.294023   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:11:02.322797   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 00:11:02.322867   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0814 00:11:02.346721   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 00:11:02.346778   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 00:11:02.369742   32343 provision.go:87] duration metric: took 306.389195ms to configureAuth
	I0814 00:11:02.369771   32343 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:11:02.370036   32343 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:11:02.370146   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.372966   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.373431   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.373456   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.373641   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:02.373825   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.373979   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.374112   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:02.374271   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:02.374475   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:02.374494   32343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:12:33.289846   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:12:33.289877   32343 machine.go:97] duration metric: took 1m31.577918269s to provisionDockerMachine
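Note the gap between the SSH command issued at 00:11:02 and its reply at 00:12:33: writing /etc/sysconfig/crio.minikube and restarting crio accounts for essentially all of the 1m31s reported here. An illustrative way to isolate that restart cost by hand, assuming the same out/minikube-linux-amd64 binary and a still-running ha-105013 profile:

    # Time just the crio restart over minikube's ssh wrapper; compare against
    # the ~91s gap visible in the timestamps above.
    time out/minikube-linux-amd64 -p ha-105013 ssh -- sudo systemctl restart crio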
	I0814 00:12:33.289889   32343 start.go:293] postStartSetup for "ha-105013" (driver="kvm2")
	I0814 00:12:33.289899   32343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:12:33.289931   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.290285   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:12:33.290322   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.293621   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.294069   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.294098   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.294233   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.294469   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.294647   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.294829   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.381642   32343 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:12:33.385433   32343 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:12:33.385459   32343 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:12:33.385520   32343 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:12:33.385608   32343 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:12:33.385626   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /etc/ssl/certs/165892.pem
	I0814 00:12:33.385717   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:12:33.394857   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:12:33.417760   32343 start.go:296] duration metric: took 127.857642ms for postStartSetup
	I0814 00:12:33.417825   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.418132   32343 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0814 00:12:33.418169   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.420782   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.421132   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.421159   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.421308   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.421497   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.421663   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.421776   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	W0814 00:12:33.504353   32343 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0814 00:12:33.504392   32343 fix.go:56] duration metric: took 1m31.812906917s for fixHost
	I0814 00:12:33.504420   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.506827   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.507156   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.507183   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.507311   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.507506   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.507668   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.507804   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.507965   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:12:33.508141   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:12:33.508154   32343 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 00:12:33.615332   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723594353.579486004
	
	I0814 00:12:33.615354   32343 fix.go:216] guest clock: 1723594353.579486004
	I0814 00:12:33.615361   32343 fix.go:229] Guest: 2024-08-14 00:12:33.579486004 +0000 UTC Remote: 2024-08-14 00:12:33.504401796 +0000 UTC m=+91.934102516 (delta=75.084208ms)
	I0814 00:12:33.615385   32343 fix.go:200] guest clock delta is within tolerance: 75.084208ms
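The clock check above samples the guest's wall clock over SSH and accepts the 75ms delta because it falls inside minikube's skew tolerance. A hand-rolled sketch of the same probe, assuming GNU-style date on both ends and a running ha-105013 profile:

    # Sample both clocks back-to-back, using the same "date +%s.%N" the log
    # runs remotely, then eyeball the difference.
    host=$(date +%s.%N)
    guest=$(out/minikube-linux-amd64 -p ha-105013 ssh -- date +%s.%N)
    echo "host=$host guest=$guest"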
	I0814 00:12:33.615391   32343 start.go:83] releasing machines lock for "ha-105013", held for 1m31.92392306s
	I0814 00:12:33.615409   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.615679   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:12:33.618207   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.618570   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.618599   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.618752   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619259   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619481   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619617   32343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:12:33.619658   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.619713   32343 ssh_runner.go:195] Run: cat /version.json
	I0814 00:12:33.619737   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.622147   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622513   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.622539   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622556   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622700   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.622853   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.623002   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.623006   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.623026   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.623164   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.623167   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.623323   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.623463   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.623689   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.699292   32343 ssh_runner.go:195] Run: systemctl --version
	I0814 00:12:33.736019   32343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:12:33.898316   32343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 00:12:33.909251   32343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:12:33.909327   32343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:12:33.918778   32343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 00:12:33.918807   32343 start.go:495] detecting cgroup driver to use...
	I0814 00:12:33.918882   32343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:12:33.937729   32343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:12:33.951960   32343 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:12:33.952015   32343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:12:33.965826   32343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:12:33.979623   32343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:12:34.134533   32343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:12:34.280172   32343 docker.go:233] disabling docker service ...
	I0814 00:12:34.280240   32343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:12:34.296238   32343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:12:34.309431   32343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:12:34.453375   32343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:12:34.598121   32343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:12:34.611594   32343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:12:34.629089   32343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 00:12:34.629138   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.638758   32343 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:12:34.638814   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.648110   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.658193   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.669635   32343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:12:34.681229   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.692327   32343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.702591   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.713805   32343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:12:34.724011   32343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:12:34.733984   32343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:12:34.882844   32343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:12:38.037094   32343 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.154214514s)
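The sed/grep sequence above patches the existing /etc/crio/crio.conf.d/02-crio.conf in place rather than replacing it. A minimal sketch, not read from the VM and covering only the four settings those edits touch (the real drop-in carries more keys), of writing the same values in one shot:

    # One-shot version of the same settings; cri-o keeps pause_image under
    # [crio.image] and the cgroup/sysctl settings under [crio.runtime].
    printf '%s\n' \
      '[crio.image]' \
      'pause_image = "registry.k8s.io/pause:3.10"' \
      '' \
      '[crio.runtime]' \
      'cgroup_manager = "cgroupfs"' \
      'conmon_cgroup = "pod"' \
      'default_sysctls = [' \
      '  "net.ipv4.ip_unprivileged_port_start=0",' \
      ']' \
      | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
    sudo systemctl restart crio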
	I0814 00:12:38.037133   32343 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:12:38.037176   32343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:12:38.041950   32343 start.go:563] Will wait 60s for crictl version
	I0814 00:12:38.042018   32343 ssh_runner.go:195] Run: which crictl
	I0814 00:12:38.045805   32343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:12:38.084890   32343 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:12:38.084988   32343 ssh_runner.go:195] Run: crio --version
	I0814 00:12:38.116034   32343 ssh_runner.go:195] Run: crio --version
	I0814 00:12:38.147597   32343 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 00:12:38.148800   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:12:38.151443   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:38.151843   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:38.151868   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:38.152034   32343 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 00:12:38.156664   32343 kubeadm.go:883] updating cluster {Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:12:38.156803   32343 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:12:38.156853   32343 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:12:38.197722   32343 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:12:38.197744   32343 crio.go:433] Images already preloaded, skipping extraction
	I0814 00:12:38.197797   32343 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:12:38.234654   32343 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:12:38.234680   32343 cache_images.go:84] Images are preloaded, skipping loading
	I0814 00:12:38.234702   32343 kubeadm.go:934] updating node { 192.168.39.79 8443 v1.31.0 crio true true} ...
	I0814 00:12:38.234826   32343 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105013 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
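The kubelet unit override quoted above ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp a few lines below). An illustrative way to view the merged unit on the node, assuming the profile is up and systemd's "systemctl cat" is available in the guest:

    # Show the kubelet unit plus every drop-in, including the 10-kubeadm.conf
    # override whose ExecStart line is quoted above.
    out/minikube-linux-amd64 -p ha-105013 ssh -- systemctl cat kubelet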
	I0814 00:12:38.234909   32343 ssh_runner.go:195] Run: crio config
	I0814 00:12:38.279386   32343 cni.go:84] Creating CNI manager for ""
	I0814 00:12:38.279408   32343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 00:12:38.279420   32343 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:12:38.279447   32343 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105013 NodeName:ha-105013 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 00:12:38.279583   32343 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105013"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
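The multi-document config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new (the 2150-byte scp a few lines below). A sketch, assuming kubeadm v1.31.0's "config validate" subcommand and a running profile, of checking the rendered file by hand:

    # Validate the rendered kubeadm config with the pinned kubeadm binary on the node.
    out/minikube-linux-amd64 -p ha-105013 ssh -- \
      sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new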
	
	I0814 00:12:38.279610   32343 kube-vip.go:115] generating kube-vip config ...
	I0814 00:12:38.279651   32343 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 00:12:38.291204   32343 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 00:12:38.291319   32343 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
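The static pod manifest above is what floats the APIServerHAVIP (192.168.39.254) to whichever control-plane node currently holds the plndr-cp-lock lease, so the API stays reachable on 8443 across restarts. Two illustrative probes, assuming the VIP is reachable from the Jenkins host, anonymous access to /healthz is still enabled, and the profile is up:

    # Is the VIP currently bound on this node's eth0? kube-vip adds and removes
    # it as leadership moves between control-plane nodes.
    out/minikube-linux-amd64 -p ha-105013 ssh -- ip -4 addr show eth0
    # Does the API server answer through the VIP? -k because the serving cert
    # chains to the cluster CA rather than a system-trusted one.
    curl -sk https://192.168.39.254:8443/healthz; echo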
	I0814 00:12:38.291397   32343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:12:38.300946   32343 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:12:38.301007   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 00:12:38.309906   32343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0814 00:12:38.324963   32343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:12:38.340034   32343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0814 00:12:38.354958   32343 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 00:12:38.377057   32343 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 00:12:38.380992   32343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:12:38.538286   32343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:12:38.552348   32343 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013 for IP: 192.168.39.79
	I0814 00:12:38.552370   32343 certs.go:194] generating shared ca certs ...
	I0814 00:12:38.552384   32343 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.552528   32343 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:12:38.552577   32343 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:12:38.552587   32343 certs.go:256] generating profile certs ...
	I0814 00:12:38.552660   32343 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/client.key
	I0814 00:12:38.552687   32343 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896
	I0814 00:12:38.552707   32343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.160 192.168.39.177 192.168.39.254]
	I0814 00:12:38.699793   32343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 ...
	I0814 00:12:38.699822   32343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896: {Name:mkdb4775096c6c509b34c1363d8ad01cbc342d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.699979   32343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896 ...
	I0814 00:12:38.699993   32343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896: {Name:mkceb0965afb0da76da07c8d2c54f2a66a4991ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.700070   32343 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt
	I0814 00:12:38.700211   32343 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key
	I0814 00:12:38.700330   32343 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key
	I0814 00:12:38.700343   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 00:12:38.700356   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 00:12:38.700369   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 00:12:38.700381   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 00:12:38.700394   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 00:12:38.700406   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 00:12:38.700418   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 00:12:38.700429   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 00:12:38.700476   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:12:38.700505   32343 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:12:38.700512   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:12:38.700534   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:12:38.700563   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:12:38.700586   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:12:38.700625   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:12:38.700650   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:38.700673   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem -> /usr/share/ca-certificates/16589.pem
	I0814 00:12:38.700692   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /usr/share/ca-certificates/165892.pem
	I0814 00:12:38.701198   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:12:38.726406   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:12:38.749768   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:12:38.772828   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:12:38.801519   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 00:12:38.870024   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:12:38.893824   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:12:38.962424   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 00:12:38.998664   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:12:39.049339   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:12:39.104408   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:12:39.156645   32343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:12:39.189277   32343 ssh_runner.go:195] Run: openssl version
	I0814 00:12:39.200714   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:12:39.212604   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.221643   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.221696   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.239343   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 00:12:39.252957   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:12:39.265663   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.270014   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.270098   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.275744   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:12:39.287096   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:12:39.300267   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.304693   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.304764   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.311422   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:12:39.325516   32343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:12:39.329959   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 00:12:39.336104   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 00:12:39.341855   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 00:12:39.347208   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 00:12:39.353407   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 00:12:39.358860   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
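Each "-checkend 86400" call above is a 24-hour expiry guard: openssl exits 0 (printing "Certificate will not expire") if the certificate is still valid a day from now and 1 otherwise. The same probe can be run by hand against any of those certs, assuming a running profile:

    # Exit status 0 means the cert is still valid 24h from now; openssl lives
    # in the guest image, as the "openssl version" run above shows.
    out/minikube-linux-amd64 -p ha-105013 ssh -- sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt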
	I0814 00:12:39.368417   32343 kubeadm.go:392] StartCluster: {Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:12:39.368582   32343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:12:39.368667   32343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:12:39.413358   32343 cri.go:89] found id: "4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca"
	I0814 00:12:39.413390   32343 cri.go:89] found id: "c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12"
	I0814 00:12:39.413394   32343 cri.go:89] found id: "a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8"
	I0814 00:12:39.413398   32343 cri.go:89] found id: "e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c"
	I0814 00:12:39.413400   32343 cri.go:89] found id: "ab27379d6e6bb1a395cb47aa01564ceeda01f91b0c78c97a50d2a4856935bed8"
	I0814 00:12:39.413404   32343 cri.go:89] found id: "d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45"
	I0814 00:12:39.413406   32343 cri.go:89] found id: "cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7"
	I0814 00:12:39.413409   32343 cri.go:89] found id: "e0adc472eb64c654df78233de9a2e57e4c6919b76e471e24a0195621f819fb12"
	I0814 00:12:39.413411   32343 cri.go:89] found id: "f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58"
	I0814 00:12:39.413417   32343 cri.go:89] found id: "9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241"
	I0814 00:12:39.413421   32343 cri.go:89] found id: "36543aa9640de83e246f62f693f3fa3b071676d8a70cd465f8e1921695121be2"
	I0814 00:12:39.413424   32343 cri.go:89] found id: "8092755d486b62c1b259e654291dfa106a8783f58cd9651dfa51bbc4cf7824a3"
	I0814 00:12:39.413427   32343 cri.go:89] found id: ""
	I0814 00:12:39.413468   32343 ssh_runner.go:195] Run: sudo runc list -f json
	
	
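The two commands above are what the tooling runs over SSH to enumerate kube-system containers and their runc state. To reproduce the listing by hand against the same node, a minimal sketch (assuming the ha-105013 profile is still up; note the harness invokes these through its internal SSH runner, not the minikube CLI):

    minikube ssh -p ha-105013 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube ssh -p ha-105013 -- sudo runc list -f json

The first command prints only container IDs, matching the "found id:" entries logged above.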
	==> CRI-O <==
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.591390216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594492591362151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eab6f245-6242-4020-be71-b7d3975951ed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.591974902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29d2e0e8-41ce-4be8-87ff-526bdefce850 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.592102646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29d2e0e8-41ce-4be8-87ff-526bdefce850 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.592585129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29d2e0e8-41ce-4be8-87ff-526bdefce850 name=/runtime.v1.RuntimeService/ListContainers
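The single-line ListContainersResponse dumps in this CRI-O log are hard to scan. A more readable view of the same container list can be produced from crictl's JSON output; this is a sketch only, and it assumes jq is available wherever the JSON is post-processed (jq is not necessarily present inside the minikube guest):

    sudo crictl ps -a -o json \
      | jq -r '.containers[] | [.metadata.name, (.metadata.attempt|tostring), .state] | @tsv'

Each row then shows the container name, attempt (restart) number, and CONTAINER_RUNNING/CONTAINER_EXITED state that the dumps above encode inline.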
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.632333031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7650cda-2425-45be-9785-818ca2d96eff name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.632410915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7650cda-2425-45be-9785-818ca2d96eff name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.633622413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6b68394-fecb-40a9-bfb8-bcffe363a6be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.634540724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594492634516169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6b68394-fecb-40a9-bfb8-bcffe363a6be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.635196813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5545d553-be41-4582-aebb-91e6ca8adfd3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.635251324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5545d553-be41-4582-aebb-91e6ca8adfd3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.635650814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5545d553-be41-4582-aebb-91e6ca8adfd3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.681069117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9305949a-dc03-4115-9f8b-4edb3a4d72cb name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.681187831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9305949a-dc03-4115-9f8b-4edb3a4d72cb name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.682271710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c299c770-b0c8-4125-a368-b82b56d2d2c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.682804408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594492682767569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c299c770-b0c8-4125-a368-b82b56d2d2c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.683404540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36d913fd-d4ae-4ee6-9f52-eaeff3c9703e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.683485920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36d913fd-d4ae-4ee6-9f52-eaeff3c9703e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.684110108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36d913fd-d4ae-4ee6-9f52-eaeff3c9703e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.727514154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d7f1a07-eac6-465f-bf55-77f23197c51a name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.727609359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d7f1a07-eac6-465f-bf55-77f23197c51a name=/runtime.v1.RuntimeService/Version
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.728766200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95c0ffa9-8d9a-48d7-b24c-66aceb550bac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.729237005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594492729212811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95c0ffa9-8d9a-48d7-b24c-66aceb550bac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.729861790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81ffe793-a4f1-4b9d-9607-b0b4320e870c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.729958880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81ffe793-a4f1-4b9d-9607-b0b4320e870c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:14:52 ha-105013 crio[3051]: time="2024-08-14 00:14:52.730396603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81ffe793-a4f1-4b9d-9607-b0b4320e870c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	34e743530eb45       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   b498a018e05bf       busybox-7dff88458-lq24p
	ee5d9b99a82fb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   c9f91079e28b9       kube-controller-manager-ha-105013
	b0888b9785ccf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            2                   f7f60d4f6540b       kube-apiserver-ha-105013
	39abf69130b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       2                   de9af903660cc       storage-provisioner
	f4235873ebc63       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      About a minute ago   Running             kube-vip                  0                   3f9f663df8afe       kube-vip-ha-105013
	f847be92bcda6       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      2 minutes ago        Running             kindnet-cni               1                   9da9ca9a7a69d       kindnet-6m57q
	d463e78fa6b27       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   b27cd500e6978       kube-proxy-qvrtb
	1c74f85631d30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   58bac8e5e4e59       coredns-6f6b679f8f-qlqtb
	8d6324bf2404b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   f8e5c70a7011b       kube-scheduler-ha-105013
	4b8c3091f0023       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       1                   de9af903660cc       storage-provisioner
	a3adba2eef6dc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   d5f7e049035c7       etcd-ha-105013
	13140714cc064       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   c9f91079e28b9       kube-controller-manager-ha-105013
	4b9280e9ce815       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   2f0f36ce454ff       coredns-6f6b679f8f-r9b46
	c23300665d9c7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            1                   f7f60d4f6540b       kube-apiserver-ha-105013
	7d7e2e3718db0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   cdd40c63d92d4       busybox-7dff88458-lq24p
	a6ce1804a980e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago       Exited              coredns                   0                   6283c8ce83590       coredns-6f6b679f8f-qlqtb
	e4a69b9d72a8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago       Exited              coredns                   0                   e9a53f92642e9       coredns-6f6b679f8f-r9b46
	d773535128c34       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      11 minutes ago       Exited              kindnet-cni               0                   b98b6b68f5b5a       kindnet-6m57q
	cae4a2039c73c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      11 minutes ago       Exited              kube-proxy                0                   f4ad05be5bf18       kube-proxy-qvrtb
	f644ed2e09489       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      11 minutes ago       Exited              kube-scheduler            0                   24f8ff464d5f9       kube-scheduler-ha-105013
	9a988632430c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago       Exited              etcd                      0                   8da0fafbf7974       etcd-ha-105013
	
	
	==> coredns [1c74f85631d30d4735c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59654 - 43696 "HINFO IN 8475110043404679788.785676527835054484. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015583691s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1450265554]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:46.999) (total time: 10001ms):
	Trace[1450265554]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:57.001)
	Trace[1450265554]: [10.001464483s] [10.001464483s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[97276489]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:47.078) (total time: 10001ms):
	Trace[97276489]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:57.080)
	Trace[97276489]: [10.001600954s] [10.001600954s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: Trace[2030459363]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:54.304) (total time: 10354ms):
	Trace[2030459363]: ---"Objects listed" error:<nil> 10354ms (00:13:04.658)
	Trace[2030459363]: [10.354303344s] [10.354303344s] END
	
	
	==> coredns [4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1214248296]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.518) (total time: 10001ms):
	Trace[1214248296]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:51.520)
	Trace[1214248296]: [10.001583284s] [10.001583284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1869583458]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.582) (total time: 10000ms):
	Trace[1869583458]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:12:51.583)
	Trace[1869583458]: [10.000797936s] [10.000797936s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1842968646]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.605) (total time: 10001ms):
	Trace[1842968646]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:51.606)
	Trace[1842968646]: [10.001612448s] [10.001612448s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50404->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50404->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8] <==
	[INFO] 10.244.2.2:37198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001784517s
	[INFO] 10.244.0.4:53333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088627s
	[INFO] 10.244.2.3:45255 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011019418s
	[INFO] 10.244.2.3:57612 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196158s
	[INFO] 10.244.2.3:54906 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145382s
	[INFO] 10.244.2.2:38596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349954s
	[INFO] 10.244.2.2:34606 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006091s
	[INFO] 10.244.0.4:44230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102209s
	[INFO] 10.244.0.4:38978 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001780187s
	[INFO] 10.244.0.4:50077 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144003s
	[INFO] 10.244.0.4:56680 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001410286s
	[INFO] 10.244.2.3:55127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145617s
	[INFO] 10.244.2.3:51971 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158815s
	[INFO] 10.244.2.2:39623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014097s
	[INFO] 10.244.2.2:37680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045642s
	[INFO] 10.244.0.4:58204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096528s
	[INFO] 10.244.0.4:56986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109539s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007298s
	[INFO] 10.244.2.3:58663 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133643s
	[INFO] 10.244.2.2:41772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206806s
	[INFO] 10.244.2.2:59812 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016311s
	[INFO] 10.244.2.2:44495 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100783s
	[INFO] 10.244.0.4:35084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066256s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c] <==
	[INFO] 10.244.2.3:40341 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135149s
	[INFO] 10.244.2.2:57488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122551s
	[INFO] 10.244.2.2:49117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866986s
	[INFO] 10.244.2.2:35755 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225314s
	[INFO] 10.244.2.2:54831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000705s
	[INFO] 10.244.2.2:53362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278259s
	[INFO] 10.244.2.2:33580 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070547s
	[INFO] 10.244.0.4:49349 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114773s
	[INFO] 10.244.0.4:54742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075532s
	[INFO] 10.244.0.4:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00018342s
	[INFO] 10.244.0.4:41002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108841s
	[INFO] 10.244.2.3:43436 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206743s
	[INFO] 10.244.2.3:47491 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085338s
	[INFO] 10.244.2.2:53250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197046s
	[INFO] 10.244.2.2:35081 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089839s
	[INFO] 10.244.0.4:55990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000325273s
	[INFO] 10.244.2.3:42694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184393s
	[INFO] 10.244.2.3:44885 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000230815s
	[INFO] 10.244.2.3:53504 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185746s
	[INFO] 10.244.2.2:48008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154437s
	[INFO] 10.244.0.4:42515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00048243s
	[INFO] 10.244.0.4:53296 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007802s
	[INFO] 10.244.0.4:35517 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000194506s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-105013
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_03_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:03:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-105013
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0258848a17e4b85b28309eb2ed0d1a0
	  System UUID:                f0258848-a17e-4b85-b283-09eb2ed0d1a0
	  Boot ID:                    52958196-c20d-4175-83fb-2d1dfa35bdf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lq24p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 coredns-6f6b679f8f-qlqtb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-6f6b679f8f-r9b46             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-ha-105013                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6m57q                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-105013             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-105013    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qvrtb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-105013             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-105013                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 91s                    kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-105013 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-105013 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-105013 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-105013 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-105013 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-105013 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-105013 status is now: NodeReady
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           8m20s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           6m8s                   node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Warning  ContainerGCFailed        3m6s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m15s (x7 over 3m17s)  kubelet          Node ha-105013 status is now: NodeNotReady
	  Normal   RegisteredNode           105s                   node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           93s                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	
	
	Name:               ha-105013-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_05_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:05:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:14:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:14:07 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:14:07 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:14:07 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:14:07 +0000   Wed, 14 Aug 2024 00:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-105013-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f980530d7ae46eba16cea428a25810e
	  System UUID:                6f980530-d7ae-46eb-a16c-ea428a25810e
	  Boot ID:                    8c1572b2-0519-4093-bfbd-60b6a740c005
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-105013-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m36s
	  kube-system                 kindnet-96bv6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m38s
	  kube-system                 kube-apiserver-ha-105013-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-controller-manager-ha-105013-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-proxy-slwhv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-scheduler-ha-105013-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-vip-ha-105013-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 6m6s                   kube-proxy       
	  Normal   Starting                 9m33s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m38s (x8 over 9m38s)  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m38s (x8 over 9m38s)  kubelet          Node ha-105013-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m38s (x7 over 9m38s)  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           9m30s                  node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           8m20s                  node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   NodeHasNoDiskPressure    6m23s                  kubelet          Node ha-105013-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m23s                  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m23s                  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 6m23s                  kubelet          Node ha-105013-m02 has been rebooted, boot id: 8c1572b2-0519-4093-bfbd-60b6a740c005
	  Normal   RegisteredNode           6m8s                   node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           105s                   node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           93s                    node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	
	
	Name:               ha-105013-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_06_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:06:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:14:27 +0000   Wed, 14 Aug 2024 00:13:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:14:27 +0000   Wed, 14 Aug 2024 00:13:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:14:27 +0000   Wed, 14 Aug 2024 00:13:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:14:27 +0000   Wed, 14 Aug 2024 00:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    ha-105013-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 07541c67fd0644fda5478a03980e907b
	  System UUID:                07541c67-fd06-44fd-a547-8a03980e907b
	  Boot ID:                    de1ce415-254b-4b22-b48b-a0e5d46d71fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5px5v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  default                     busybox-7dff88458-b6xdd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 etcd-ha-105013-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-77bnm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m28s
	  kube-system                 kube-apiserver-ha-105013-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-ha-105013-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-2ps5t                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-ha-105013-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-vip-ha-105013-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 39s                    kube-proxy       
	  Normal   Starting                 8m23s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-105013-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-105013-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x7 over 8m29s)  kubelet          Node ha-105013-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m27s                  node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   RegisteredNode           8m25s                  node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   RegisteredNode           8m20s                  node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   RegisteredNode           6m8s                   node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   NodeNotReady             5m12s                  node-controller  Node ha-105013-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           105s                   node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	  Normal   Starting                 57s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s (x2 over 57s)      kubelet          Node ha-105013-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 57s)      kubelet          Node ha-105013-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 57s)      kubelet          Node ha-105013-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s                    kubelet          Node ha-105013-m03 has been rebooted, boot id: de1ce415-254b-4b22-b48b-a0e5d46d71fc
	  Normal   NodeReady                57s                    kubelet          Node ha-105013-m03 status is now: NodeReady
	  Normal   RegisteredNode           41s                    node-controller  Node ha-105013-m03 event: Registered Node ha-105013-m03 in Controller
	
	
	Name:               ha-105013-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:07:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:14:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:14:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:14:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:14:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-105013-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6be49b5c3de54a60bb4afcf41f306129
	  System UUID:                6be49b5c-3de5-4a60-bb4a-fcf41f306129
	  Boot ID:                    86fe3c2f-a1e4-4fe1-b1e4-999fa5730da2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pzk88       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-proxy-2cd8m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m16s                  kube-proxy       
	  Normal   Starting                 5s                     kube-proxy       
	  Normal   NodeHasSufficientMemory  7m23s (x2 over 7m23s)  kubelet          Node ha-105013-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m23s (x2 over 7m23s)  kubelet          Node ha-105013-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m23s (x2 over 7m23s)  kubelet          Node ha-105013-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   NodeReady                7m3s                   kubelet          Node ha-105013-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m8s                   node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   NodeNotReady             5m18s                  node-controller  Node ha-105013-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           105s                   node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           93s                    node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           41s                    node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)        kubelet          Node ha-105013-m04 has been rebooted, boot id: 86fe3c2f-a1e4-4fe1-b1e4-999fa5730da2
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)        kubelet          Node ha-105013-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)        kubelet          Node ha-105013-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)        kubelet          Node ha-105013-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                     kubelet          Node ha-105013-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                     kubelet          Node ha-105013-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.521245] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.059332] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071956] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.158719] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.119205] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.265293] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.742859] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +5.354821] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.066104] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.215402] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.072072] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.006156] kauditd_printk_skb: 23 callbacks suppressed
	[Aug14 00:04] kauditd_printk_skb: 36 callbacks suppressed
	[Aug14 00:05] kauditd_printk_skb: 24 callbacks suppressed
	[Aug14 00:12] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +0.147407] systemd-fstab-generator[2982]: Ignoring "noauto" option for root device
	[  +0.174813] systemd-fstab-generator[2996]: Ignoring "noauto" option for root device
	[  +0.143640] systemd-fstab-generator[3008]: Ignoring "noauto" option for root device
	[  +0.278510] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +3.648203] systemd-fstab-generator[3136]: Ignoring "noauto" option for root device
	[  +0.726992] kauditd_printk_skb: 137 callbacks suppressed
	[ +16.832231] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 00:13] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.886493] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241] <==
	{"level":"info","ts":"2024-08-14T00:11:02.628365Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"a91a1bbc2c758cdc","old-leader-member-id":"a91a1bbc2c758cdc","new-leader-member-id":"d2b4737fd3ffd670","took":"100.199181ms"}
	{"level":"info","ts":"2024-08-14T00:11:02.628624Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.628652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.628687Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629231Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629326Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629365Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629437Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629445Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.629745Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.629777Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.629928Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.629954Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.630076Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630179Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670","error":"context canceled"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630259Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d2b4737fd3ffd670","error":"failed to read d2b4737fd3ffd670 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-14T00:11:02.630279Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630430Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670","error":"context canceled"}
	{"level":"info","ts":"2024-08-14T00:11:02.630445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.630456Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.636242Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"warn","ts":"2024-08-14T00:11:02.636393Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.160:57234","server-name":"","error":"read tcp 192.168.39.79:2380->192.168.39.160:57234: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:11:02.638605Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.160:57228","server-name":"","error":"set tcp 192.168.39.79:2380: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T00:11:03.636466Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-08-14T00:11:03.636505Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-105013","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.79:2380"],"advertise-client-urls":["https://192.168.39.79:2379"]}
	
	
	==> etcd [a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2] <==
	{"level":"warn","ts":"2024-08-14T00:13:51.270011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.292698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.336525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.371796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.373796Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.436780Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.451802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:51.536445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91a1bbc2c758cdc","from":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T00:13:54.582681Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.177:2380/version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:13:54.582751Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:13:56.058571Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b5930f6d9553dfd0","rtt":"0s","error":"dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:13:56.058690Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b5930f6d9553dfd0","rtt":"0s","error":"dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:13:58.584934Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.177:2380/version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:13:58.585005Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:14:01.059529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b5930f6d9553dfd0","rtt":"0s","error":"dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:14:01.059651Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b5930f6d9553dfd0","rtt":"0s","error":"dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:14:02.587068Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.177:2380/version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T00:14:02.587214Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b5930f6d9553dfd0","error":"Get \"https://192.168.39.177:2380/version\": dial tcp 192.168.39.177:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-14T00:14:04.799419Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.801457Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.802214Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.807265Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"b5930f6d9553dfd0","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-14T00:14:04.807312Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.810674Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"b5930f6d9553dfd0","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-14T00:14:04.810751Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	
	
	==> kernel <==
	 00:14:53 up 11 min,  0 users,  load average: 1.69, 0.88, 0.46
	Linux ha-105013 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45] <==
	I0814 00:10:23.789387       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:33.798675       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:33.798869       1 main.go:299] handling current node
	I0814 00:10:33.798939       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:33.798961       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:33.799155       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:33.799179       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:33.799234       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:33.799251       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:43.788723       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:43.788763       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:43.788956       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:43.788985       1 main.go:299] handling current node
	I0814 00:10:43.788996       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:43.789002       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:43.789078       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:43.789093       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:53.789336       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:53.789395       1 main.go:299] handling current node
	I0814 00:10:53.789415       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:53.789423       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:53.789582       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:53.789609       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:53.789675       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:53.789693       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f] <==
	I0814 00:14:16.598995       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:14:26.597384       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:14:26.597502       1 main.go:299] handling current node
	I0814 00:14:26.597535       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:14:26.597541       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:14:26.601109       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:14:26.601213       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:14:26.601332       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:14:26.601360       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:14:36.596109       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:14:36.596177       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:14:36.596449       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:14:36.596486       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:14:36.596601       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:14:36.596628       1 main.go:299] handling current node
	I0814 00:14:36.596645       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:14:36.596713       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:14:46.595819       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:14:46.595946       1 main.go:299] handling current node
	I0814 00:14:46.595975       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:14:46.596003       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:14:46.596182       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:14:46.596211       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:14:46.596347       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:14:46.596373       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd] <==
	I0814 00:13:16.488427       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0814 00:13:16.488443       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0814 00:13:16.565258       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 00:13:16.565762       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:13:16.565936       1 policy_source.go:224] refreshing policies
	I0814 00:13:16.567263       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 00:13:16.568088       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 00:13:16.568451       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 00:13:16.579700       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:13:16.583848       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 00:13:16.584503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 00:13:16.590541       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 00:13:16.591700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 00:13:16.591841       1 aggregator.go:171] initial CRD sync complete...
	I0814 00:13:16.592236       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 00:13:16.592277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 00:13:16.592302       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:13:16.592776       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0814 00:13:16.605442       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160]
	I0814 00:13:16.606962       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:13:16.619948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0814 00:13:16.626177       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0814 00:13:16.655751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:13:17.473508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0814 00:13:18.244246       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.79]
	
	
	==> kube-apiserver [c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12] <==
	I0814 00:12:39.244549       1 options.go:228] external host was not specified, using 192.168.39.79
	I0814 00:12:39.248212       1 server.go:142] Version: v1.31.0
	I0814 00:12:39.248503       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0814 00:12:39.559258       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:39.559326       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0814 00:12:39.559379       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0814 00:12:39.566965       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:12:39.570913       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0814 00:12:39.570977       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0814 00:12:39.571173       1 instance.go:232] Using reconciler: lease
	W0814 00:12:39.572051       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.560783       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.560866       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.572607       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.161145       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.216314       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.372287       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:44.447500       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:44.677435       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:45.145452       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:59.558381       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0814 00:12:59.559313       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0814 00:12:59.571992       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33] <==
	I0814 00:12:46.109027       1 serving.go:386] Generated self-signed cert in-memory
	I0814 00:12:46.268452       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0814 00:12:46.269264       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:12:46.271177       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0814 00:12:46.271871       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 00:12:46.272140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 00:12:46.272306       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0814 00:13:06.274543       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.79:8443/healthz\": dial tcp 192.168.39.79:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819] <==
	I0814 00:13:20.099370       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0814 00:13:20.504798       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:13:20.546390       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:13:20.546494       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0814 00:13:22.614605       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013"
	I0814 00:13:22.847091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.991µs"
	I0814 00:13:28.428651       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:13:30.180006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:13:56.961397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m03"
	I0814 00:13:56.982828       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m03"
	I0814 00:13:57.929265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.662µs"
	I0814 00:13:57.949633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.345µs"
	I0814 00:13:58.203437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m03"
	I0814 00:14:07.586292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m02"
	I0814 00:14:13.130065       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:14:13.187171       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:14:16.147060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.199781ms"
	I0814 00:14:16.147169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.765µs"
	I0814 00:14:17.113343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.62937ms"
	I0814 00:14:17.113697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="142.786µs"
	I0814 00:14:27.519818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m03"
	I0814 00:14:44.551970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:14:44.552347       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-105013-m04"
	I0814 00:14:44.573066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:14:45.073602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	
	
	==> kube-proxy [cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:03:52.659061       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:03:52.679318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.79"]
	E0814 00:03:52.679386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:03:52.731286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:03:52.731326       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:03:52.731353       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:03:52.733433       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:03:52.733831       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:03:52.733964       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:03:52.735266       1 config.go:197] "Starting service config controller"
	I0814 00:03:52.735341       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:03:52.735408       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:03:52.735443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:03:52.736166       1 config.go:326] "Starting node config controller"
	I0814 00:03:52.736203       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:03:52.835703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:03:52.835796       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:03:52.836469       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:12:56.391411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": net/http: TLS handshake timeout"
	E0814 00:13:03.028604       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: read tcp 192.168.39.254:39438->192.168.39.254:8443: read: connection reset by peer"
	E0814 00:13:06.101086       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 00:13:12.244696       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0814 00:13:21.739673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.79"]
	E0814 00:13:21.739813       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:13:21.773436       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:13:21.773478       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:13:21.773541       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:13:21.775828       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:13:21.776161       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:13:21.776184       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:13:21.777573       1 config.go:197] "Starting service config controller"
	I0814 00:13:21.777623       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:13:21.777647       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:13:21.777662       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:13:21.778493       1 config.go:326] "Starting node config controller"
	I0814 00:13:21.778519       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:13:21.878546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:13:21.878583       1 shared_informer.go:320] Caches are synced for node config
	I0814 00:13:21.878598       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4] <==
	W0814 00:13:08.294572       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.294833       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.79:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.370859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.371085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.79:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.734096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.734151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.79:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.771992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.79:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.772038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.79:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.904714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.79:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.904825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.79:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.906290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.906386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:09.699746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:09.699801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:10.346371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:10.346425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:10.796842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:10.797015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:16.498465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:13:16.498608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:13:16.498472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:13:16.499409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:13:16.505215       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 00:13:16.505255       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 00:13:41.688494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58] <==
	I0814 00:03:48.433993       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 00:06:53.566989       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="fd3f4fa0-b215-4671-8d8a-310dcd4cac18" pod="default/busybox-7dff88458-5px5v" assumedNode="ha-105013-m03" currentNode="ha-105013-m02"
	E0814 00:06:53.578753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5px5v\": pod busybox-7dff88458-5px5v is already assigned to node \"ha-105013-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5px5v" node="ha-105013-m02"
	E0814 00:06:53.579116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fd3f4fa0-b215-4671-8d8a-310dcd4cac18(default/busybox-7dff88458-5px5v) was assumed on ha-105013-m02 but assigned to ha-105013-m03" pod="default/busybox-7dff88458-5px5v"
	E0814 00:06:53.579261       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5px5v\": pod busybox-7dff88458-5px5v is already assigned to node \"ha-105013-m03\"" pod="default/busybox-7dff88458-5px5v"
	I0814 00:06:53.579379       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5px5v" node="ha-105013-m03"
	E0814 00:07:31.050023       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2cd8m\": pod kube-proxy-2cd8m is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2cd8m" node="ha-105013-m04"
	E0814 00:07:31.050117       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e5bb37bb-b8f9-4a66-8a98-778055989065(kube-system/kube-proxy-2cd8m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2cd8m"
	E0814 00:07:31.050142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2cd8m\": pod kube-proxy-2cd8m is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-2cd8m"
	I0814 00:07:31.050175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2cd8m" node="ha-105013-m04"
	E0814 00:07:31.114249       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xxs4\": pod kube-proxy-5xxs4 is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xxs4" node="ha-105013-m04"
	E0814 00:07:31.115258       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xxs4\": pod kube-proxy-5xxs4 is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-5xxs4"
	E0814 00:07:31.117351       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t8dfd\": pod kube-proxy-t8dfd is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t8dfd" node="ha-105013-m04"
	E0814 00:07:31.117474       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 34d2f2d4-f6f7-48b0-9325-0a4be891bc91(kube-system/kube-proxy-t8dfd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-t8dfd"
	E0814 00:07:31.117548       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t8dfd\": pod kube-proxy-t8dfd is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-t8dfd"
	I0814 00:07:31.117593       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t8dfd" node="ha-105013-m04"
	E0814 00:07:31.118258       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2dmsx\": pod kindnet-2dmsx is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2dmsx" node="ha-105013-m04"
	E0814 00:07:31.118324       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0343723-94eb-47f2-a11c-ed9a25875f46(kube-system/kindnet-2dmsx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2dmsx"
	E0814 00:07:31.118343       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2dmsx\": pod kindnet-2dmsx is already assigned to node \"ha-105013-m04\"" pod="kube-system/kindnet-2dmsx"
	I0814 00:07:31.118359       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2dmsx" node="ha-105013-m04"
	E0814 00:07:31.128785       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jgnhw\": pod kindnet-jgnhw is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jgnhw" node="ha-105013-m04"
	E0814 00:07:31.129287       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ffc28f4d-e2ce-4f73-a7d3-4df8b62d445b(kube-system/kindnet-jgnhw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jgnhw"
	E0814 00:07:31.129360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jgnhw\": pod kindnet-jgnhw is already assigned to node \"ha-105013-m04\"" pod="kube-system/kindnet-jgnhw"
	I0814 00:07:31.129458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jgnhw" node="ha-105013-m04"
	E0814 00:11:02.504669       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 00:13:47 ha-105013 kubelet[1324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:13:47 ha-105013 kubelet[1324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:13:47 ha-105013 kubelet[1324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:13:47 ha-105013 kubelet[1324]: E0814 00:13:47.777295    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594427776794525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:13:47 ha-105013 kubelet[1324]: E0814 00:13:47.777352    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594427776794525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:13:57 ha-105013 kubelet[1324]: E0814 00:13:57.780196    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594437779339711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:13:57 ha-105013 kubelet[1324]: E0814 00:13:57.780238    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594437779339711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:07 ha-105013 kubelet[1324]: E0814 00:14:07.782796    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594447782193130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:07 ha-105013 kubelet[1324]: E0814 00:14:07.783437    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594447782193130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:13 ha-105013 kubelet[1324]: I0814 00:14:13.622830    1324 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-105013" podUID="d8068b30-e968-4255-9acf-ede6b50ea45b"
	Aug 14 00:14:13 ha-105013 kubelet[1324]: I0814 00:14:13.646078    1324 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-105013"
	Aug 14 00:14:14 ha-105013 kubelet[1324]: I0814 00:14:14.111025    1324 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-105013" podUID="d8068b30-e968-4255-9acf-ede6b50ea45b"
	Aug 14 00:14:17 ha-105013 kubelet[1324]: E0814 00:14:17.786460    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594457785663736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:17 ha-105013 kubelet[1324]: E0814 00:14:17.786801    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594457785663736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:27 ha-105013 kubelet[1324]: E0814 00:14:27.788305    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594467787786550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:27 ha-105013 kubelet[1324]: E0814 00:14:27.788345    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594467787786550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:37 ha-105013 kubelet[1324]: E0814 00:14:37.789942    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594477789494105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:37 ha-105013 kubelet[1324]: E0814 00:14:37.789999    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594477789494105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:47 ha-105013 kubelet[1324]: E0814 00:14:47.643035    1324 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 00:14:47 ha-105013 kubelet[1324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 00:14:47 ha-105013 kubelet[1324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:14:47 ha-105013 kubelet[1324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:14:47 ha-105013 kubelet[1324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:14:47 ha-105013 kubelet[1324]: E0814 00:14:47.791814    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594487791283262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:14:47 ha-105013 kubelet[1324]: E0814 00:14:47.791871    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594487791283262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:14:52.306228   33547 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19429-9425/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105013 -n ha-105013
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105013 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (354.57s)
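Note on the "bufio.Scanner: token too long" error in the stderr block above: this is the standard error string of Go's bufio.ErrTooLong, returned when a single line exceeds the scanner's token limit (bufio.MaxScanTokenSize, 64 KiB by default). The sketch below is not the minikube logs.go source; it is a minimal, hypothetical reproduction of how that failure arises when reading a file such as lastStart.txt line-by-line, and how a larger buffer avoids it.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLines reads a file line-by-line. With the default scanner buffer, any line
// longer than bufio.MaxScanTokenSize (64 KiB) makes Err() return bufio.ErrTooLong,
// reported as "bufio.Scanner: token too long".
func readLines(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	// Raise the per-line limit (here to 1 MiB) so very long lines do not trip ErrTooLong.
	s.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	var lines []string
	for s.Scan() {
		lines = append(lines, s.Text())
	}
	return lines, s.Err()
}

func main() {
	lines, err := readLines("lastStart.txt") // illustrative path, not the real log location
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to read file:", err)
		os.Exit(1)
	}
	fmt.Println(len(lines), "lines")
}
```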

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 stop -v=7 --alsologtostderr
E0814 00:16:28.584983   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105013 stop -v=7 --alsologtostderr: exit status 82 (2m0.457551118s)

                                                
                                                
-- stdout --
	* Stopping node "ha-105013-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:15:11.398738   33957 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:15:11.398869   33957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:15:11.398879   33957 out.go:304] Setting ErrFile to fd 2...
	I0814 00:15:11.398886   33957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:15:11.399046   33957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:15:11.399279   33957 out.go:298] Setting JSON to false
	I0814 00:15:11.399373   33957 mustload.go:65] Loading cluster: ha-105013
	I0814 00:15:11.399720   33957 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:15:11.399815   33957 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/config.json ...
	I0814 00:15:11.400006   33957 mustload.go:65] Loading cluster: ha-105013
	I0814 00:15:11.400157   33957 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:15:11.400196   33957 stop.go:39] StopHost: ha-105013-m04
	I0814 00:15:11.400586   33957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:15:11.400635   33957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:15:11.415927   33957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0814 00:15:11.416291   33957 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:15:11.416920   33957 main.go:141] libmachine: Using API Version  1
	I0814 00:15:11.416948   33957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:15:11.417278   33957 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:15:11.419504   33957 out.go:177] * Stopping node "ha-105013-m04"  ...
	I0814 00:15:11.420571   33957 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:15:11.420601   33957 main.go:141] libmachine: (ha-105013-m04) Calling .DriverName
	I0814 00:15:11.420815   33957 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:15:11.420854   33957 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHHostname
	I0814 00:15:11.423790   33957 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:15:11.424132   33957 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:14:39 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:15:11.424157   33957 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:15:11.424298   33957 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHPort
	I0814 00:15:11.424481   33957 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHKeyPath
	I0814 00:15:11.424643   33957 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHUsername
	I0814 00:15:11.424786   33957 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m04/id_rsa Username:docker}
	I0814 00:15:11.508086   33957 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:15:11.560199   33957 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:15:11.612680   33957 main.go:141] libmachine: Stopping "ha-105013-m04"...
	I0814 00:15:11.612716   33957 main.go:141] libmachine: (ha-105013-m04) Calling .GetState
	I0814 00:15:11.614221   33957 main.go:141] libmachine: (ha-105013-m04) Calling .Stop
	I0814 00:15:11.618406   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 0/120
	I0814 00:15:12.619785   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 1/120
	I0814 00:15:13.621048   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 2/120
	I0814 00:15:14.622410   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 3/120
	I0814 00:15:15.623776   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 4/120
	I0814 00:15:16.625831   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 5/120
	I0814 00:15:17.627349   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 6/120
	I0814 00:15:18.628650   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 7/120
	I0814 00:15:19.631005   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 8/120
	I0814 00:15:20.632496   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 9/120
	I0814 00:15:21.634526   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 10/120
	I0814 00:15:22.636378   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 11/120
	I0814 00:15:23.637566   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 12/120
	I0814 00:15:24.638683   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 13/120
	I0814 00:15:25.640233   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 14/120
	I0814 00:15:26.642181   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 15/120
	I0814 00:15:27.643358   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 16/120
	I0814 00:15:28.644493   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 17/120
	I0814 00:15:29.645798   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 18/120
	I0814 00:15:30.647094   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 19/120
	I0814 00:15:31.649072   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 20/120
	I0814 00:15:32.650428   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 21/120
	I0814 00:15:33.651775   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 22/120
	I0814 00:15:34.652949   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 23/120
	I0814 00:15:35.654479   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 24/120
	I0814 00:15:36.656304   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 25/120
	I0814 00:15:37.657601   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 26/120
	I0814 00:15:38.658979   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 27/120
	I0814 00:15:39.660604   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 28/120
	I0814 00:15:40.662295   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 29/120
	I0814 00:15:41.664436   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 30/120
	I0814 00:15:42.665622   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 31/120
	I0814 00:15:43.667052   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 32/120
	I0814 00:15:44.668320   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 33/120
	I0814 00:15:45.669832   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 34/120
	I0814 00:15:46.671695   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 35/120
	I0814 00:15:47.673218   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 36/120
	I0814 00:15:48.674367   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 37/120
	I0814 00:15:49.676847   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 38/120
	I0814 00:15:50.678177   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 39/120
	I0814 00:15:51.679392   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 40/120
	I0814 00:15:52.681129   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 41/120
	I0814 00:15:53.682427   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 42/120
	I0814 00:15:54.683814   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 43/120
	I0814 00:15:55.685168   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 44/120
	I0814 00:15:56.687221   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 45/120
	I0814 00:15:57.688643   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 46/120
	I0814 00:15:58.690269   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 47/120
	I0814 00:15:59.692451   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 48/120
	I0814 00:16:00.693686   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 49/120
	I0814 00:16:01.695145   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 50/120
	I0814 00:16:02.696309   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 51/120
	I0814 00:16:03.697719   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 52/120
	I0814 00:16:04.699092   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 53/120
	I0814 00:16:05.700464   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 54/120
	I0814 00:16:06.702144   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 55/120
	I0814 00:16:07.703334   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 56/120
	I0814 00:16:08.704514   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 57/120
	I0814 00:16:09.706061   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 58/120
	I0814 00:16:10.707318   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 59/120
	I0814 00:16:11.709183   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 60/120
	I0814 00:16:12.710523   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 61/120
	I0814 00:16:13.711872   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 62/120
	I0814 00:16:14.713771   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 63/120
	I0814 00:16:15.715431   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 64/120
	I0814 00:16:16.717310   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 65/120
	I0814 00:16:17.718538   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 66/120
	I0814 00:16:18.719831   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 67/120
	I0814 00:16:19.721180   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 68/120
	I0814 00:16:20.722428   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 69/120
	I0814 00:16:21.724260   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 70/120
	I0814 00:16:22.725724   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 71/120
	I0814 00:16:23.726868   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 72/120
	I0814 00:16:24.728303   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 73/120
	I0814 00:16:25.730412   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 74/120
	I0814 00:16:26.732330   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 75/120
	I0814 00:16:27.733814   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 76/120
	I0814 00:16:28.735681   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 77/120
	I0814 00:16:29.736991   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 78/120
	I0814 00:16:30.738617   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 79/120
	I0814 00:16:31.740458   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 80/120
	I0814 00:16:32.742143   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 81/120
	I0814 00:16:33.743546   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 82/120
	I0814 00:16:34.745219   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 83/120
	I0814 00:16:35.746929   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 84/120
	I0814 00:16:36.749022   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 85/120
	I0814 00:16:37.751170   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 86/120
	I0814 00:16:38.752456   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 87/120
	I0814 00:16:39.753843   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 88/120
	I0814 00:16:40.755139   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 89/120
	I0814 00:16:41.757217   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 90/120
	I0814 00:16:42.759459   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 91/120
	I0814 00:16:43.761103   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 92/120
	I0814 00:16:44.762348   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 93/120
	I0814 00:16:45.763686   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 94/120
	I0814 00:16:46.765838   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 95/120
	I0814 00:16:47.767053   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 96/120
	I0814 00:16:48.769043   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 97/120
	I0814 00:16:49.770342   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 98/120
	I0814 00:16:50.772583   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 99/120
	I0814 00:16:51.775017   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 100/120
	I0814 00:16:52.776908   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 101/120
	I0814 00:16:53.778658   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 102/120
	I0814 00:16:54.780531   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 103/120
	I0814 00:16:55.782371   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 104/120
	I0814 00:16:56.784387   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 105/120
	I0814 00:16:57.785655   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 106/120
	I0814 00:16:58.787148   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 107/120
	I0814 00:16:59.788407   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 108/120
	I0814 00:17:00.789903   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 109/120
	I0814 00:17:01.792066   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 110/120
	I0814 00:17:02.793484   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 111/120
	I0814 00:17:03.794685   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 112/120
	I0814 00:17:04.796488   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 113/120
	I0814 00:17:05.797788   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 114/120
	I0814 00:17:06.799437   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 115/120
	I0814 00:17:07.800977   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 116/120
	I0814 00:17:08.802176   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 117/120
	I0814 00:17:09.803640   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 118/120
	I0814 00:17:10.804971   33957 main.go:141] libmachine: (ha-105013-m04) Waiting for machine to stop 119/120
	I0814 00:17:11.806005   33957 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 00:17:11.806093   33957 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 00:17:11.808053   33957 out.go:177] 
	W0814 00:17:11.809331   33957 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 00:17:11.809348   33957 out.go:239] * 
	* 
	W0814 00:17:11.811601   33957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 00:17:11.812908   33957 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-105013 stop -v=7 --alsologtostderr": exit status 82
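Editorial note, not part of the captured output: the GUEST_STOP_TIMEOUT above means the ha-105013-m04 libvirt domain stayed in the "Running" state through all 120 one-second wait iterations, so `minikube stop` gave up with exit status 82. Below is a hedged diagnostic sketch for the KVM host; the domain and profile names are taken from the log, while the shell and available tooling are assumptions.

	virsh list --all                                               # confirm the domain state libmachine keeps polling
	virsh shutdown ha-105013-m04                                   # ask the guest for an ACPI power-off
	virsh destroy ha-105013-m04                                    # hard power-off if the guest ignores ACPI
	out/minikube-linux-amd64 -p ha-105013 stop --alsologtostderr   # retry the stop the test issued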
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
E0814 00:17:14.186575   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr: exit status 3 (19.024235025s)

                                                
                                                
-- stdout --
	ha-105013
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105013-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105013-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:17:11.855955   34400 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:17:11.856225   34400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:17:11.856235   34400 out.go:304] Setting ErrFile to fd 2...
	I0814 00:17:11.856240   34400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:17:11.856447   34400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:17:11.856619   34400 out.go:298] Setting JSON to false
	I0814 00:17:11.856649   34400 mustload.go:65] Loading cluster: ha-105013
	I0814 00:17:11.856753   34400 notify.go:220] Checking for updates...
	I0814 00:17:11.857224   34400 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:17:11.857243   34400 status.go:255] checking status of ha-105013 ...
	I0814 00:17:11.857722   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:11.857832   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:11.876667   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0814 00:17:11.877146   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:11.877812   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:11.877843   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:11.878229   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:11.878415   34400 main.go:141] libmachine: (ha-105013) Calling .GetState
	I0814 00:17:11.880127   34400 status.go:330] ha-105013 host status = "Running" (err=<nil>)
	I0814 00:17:11.880143   34400 host.go:66] Checking if "ha-105013" exists ...
	I0814 00:17:11.880473   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:11.880511   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:11.895284   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0814 00:17:11.895662   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:11.896115   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:11.896140   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:11.896417   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:11.896575   34400 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:17:11.899461   34400 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:17:11.899855   34400 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:17:11.899874   34400 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:17:11.900043   34400 host.go:66] Checking if "ha-105013" exists ...
	I0814 00:17:11.900357   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:11.900392   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:11.915134   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I0814 00:17:11.915526   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:11.916003   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:11.916016   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:11.916431   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:11.916622   34400 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:17:11.916816   34400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:17:11.916834   34400 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:17:11.919427   34400 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:17:11.919696   34400 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:17:11.919727   34400 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:17:11.919907   34400 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:17:11.920049   34400 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:17:11.920174   34400 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:17:11.920276   34400 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:17:12.007129   34400 ssh_runner.go:195] Run: systemctl --version
	I0814 00:17:12.015041   34400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:17:12.033387   34400 kubeconfig.go:125] found "ha-105013" server: "https://192.168.39.254:8443"
	I0814 00:17:12.033415   34400 api_server.go:166] Checking apiserver status ...
	I0814 00:17:12.033444   34400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:17:12.050938   34400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4253/cgroup
	W0814 00:17:12.063010   34400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 00:17:12.063066   34400 ssh_runner.go:195] Run: ls
	I0814 00:17:12.067677   34400 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 00:17:12.071796   34400 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 00:17:12.071814   34400 status.go:422] ha-105013 apiserver status = Running (err=<nil>)
	I0814 00:17:12.071823   34400 status.go:257] ha-105013 status: &{Name:ha-105013 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:17:12.071836   34400 status.go:255] checking status of ha-105013-m02 ...
	I0814 00:17:12.072221   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.072255   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.086669   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0814 00:17:12.087094   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.087512   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.087532   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.087816   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.087990   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetState
	I0814 00:17:12.089485   34400 status.go:330] ha-105013-m02 host status = "Running" (err=<nil>)
	I0814 00:17:12.089498   34400 host.go:66] Checking if "ha-105013-m02" exists ...
	I0814 00:17:12.089867   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.089905   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.104736   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0814 00:17:12.105082   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.105544   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.105563   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.105842   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.106067   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetIP
	I0814 00:17:12.108720   34400 main.go:141] libmachine: (ha-105013-m02) DBG | domain ha-105013-m02 has defined MAC address 52:54:00:ef:09:06 in network mk-ha-105013
	I0814 00:17:12.109147   34400 main.go:141] libmachine: (ha-105013-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:09:06", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:08:24 +0000 UTC Type:0 Mac:52:54:00:ef:09:06 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-105013-m02 Clientid:01:52:54:00:ef:09:06}
	I0814 00:17:12.109168   34400 main.go:141] libmachine: (ha-105013-m02) DBG | domain ha-105013-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:ef:09:06 in network mk-ha-105013
	I0814 00:17:12.109308   34400 host.go:66] Checking if "ha-105013-m02" exists ...
	I0814 00:17:12.109614   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.109647   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.123705   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I0814 00:17:12.124062   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.124470   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.124491   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.124810   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.124970   34400 main.go:141] libmachine: (ha-105013-m02) Calling .DriverName
	I0814 00:17:12.125145   34400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:17:12.125162   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetSSHHostname
	I0814 00:17:12.127428   34400 main.go:141] libmachine: (ha-105013-m02) DBG | domain ha-105013-m02 has defined MAC address 52:54:00:ef:09:06 in network mk-ha-105013
	I0814 00:17:12.127764   34400 main.go:141] libmachine: (ha-105013-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:09:06", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:08:24 +0000 UTC Type:0 Mac:52:54:00:ef:09:06 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-105013-m02 Clientid:01:52:54:00:ef:09:06}
	I0814 00:17:12.127791   34400 main.go:141] libmachine: (ha-105013-m02) DBG | domain ha-105013-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:ef:09:06 in network mk-ha-105013
	I0814 00:17:12.127881   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetSSHPort
	I0814 00:17:12.128043   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetSSHKeyPath
	I0814 00:17:12.128205   34400 main.go:141] libmachine: (ha-105013-m02) Calling .GetSSHUsername
	I0814 00:17:12.128335   34400 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m02/id_rsa Username:docker}
	I0814 00:17:12.211031   34400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:17:12.226144   34400 kubeconfig.go:125] found "ha-105013" server: "https://192.168.39.254:8443"
	I0814 00:17:12.226172   34400 api_server.go:166] Checking apiserver status ...
	I0814 00:17:12.226216   34400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:17:12.240111   34400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/3716/cgroup
	W0814 00:17:12.249750   34400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/3716/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 00:17:12.249792   34400 ssh_runner.go:195] Run: ls
	I0814 00:17:12.253506   34400 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 00:17:12.257512   34400 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 00:17:12.257531   34400 status.go:422] ha-105013-m02 apiserver status = Running (err=<nil>)
	I0814 00:17:12.257541   34400 status.go:257] ha-105013-m02 status: &{Name:ha-105013-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:17:12.257569   34400 status.go:255] checking status of ha-105013-m04 ...
	I0814 00:17:12.257934   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.257980   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.272800   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38749
	I0814 00:17:12.273152   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.273573   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.273596   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.273882   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.274079   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetState
	I0814 00:17:12.275606   34400 status.go:330] ha-105013-m04 host status = "Running" (err=<nil>)
	I0814 00:17:12.275618   34400 host.go:66] Checking if "ha-105013-m04" exists ...
	I0814 00:17:12.275914   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.275951   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.291025   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0814 00:17:12.291446   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.291909   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.291926   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.292220   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.292373   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetIP
	I0814 00:17:12.295465   34400 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:17:12.295913   34400 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:14:39 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:17:12.295939   34400 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:17:12.296051   34400 host.go:66] Checking if "ha-105013-m04" exists ...
	I0814 00:17:12.296341   34400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:17:12.296374   34400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:17:12.310846   34400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0814 00:17:12.311239   34400 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:17:12.311736   34400 main.go:141] libmachine: Using API Version  1
	I0814 00:17:12.311756   34400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:17:12.312103   34400 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:17:12.312275   34400 main.go:141] libmachine: (ha-105013-m04) Calling .DriverName
	I0814 00:17:12.312551   34400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:17:12.312569   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHHostname
	I0814 00:17:12.315406   34400 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:17:12.315867   34400 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:14:39 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:17:12.315925   34400 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:17:12.316017   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHPort
	I0814 00:17:12.316178   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHKeyPath
	I0814 00:17:12.316321   34400 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHUsername
	I0814 00:17:12.316459   34400 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m04/id_rsa Username:docker}
	W0814 00:17:30.838282   34400 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0814 00:17:30.838387   34400 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0814 00:17:30.838403   34400 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0814 00:17:30.838411   34400 status.go:257] ha-105013-m04 status: &{Name:ha-105013-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0814 00:17:30.838427   34400 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr" : exit status 3
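Editorial note, not part of the captured output: the status command exits 3 because the SSH dial to ha-105013-m04 at 192.168.39.102:22 returns "no route to host", which is why that worker is reported as Host:Error / kubelet Nonexistent while both control-plane nodes stay healthy. A hedged sketch of how the guest could be checked from the host before re-running status; the names and the IP come from the log above, and the tooling (libvirt CLI, an OpenBSD-style netcat) is assumed to be installed.

	virsh domifaddr ha-105013-m04                                  # does the guest still hold the 192.168.39.102 lease?
	ping -c 3 192.168.39.102                                       # basic reachability from the host
	nc -vz -w 5 192.168.39.102 22                                  # is sshd answering on port 22?
	out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr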
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105013 -n ha-105013
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105013 logs -n 25: (1.565317226s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-105013 ssh -n ha-105013-m02 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m03_ha-105013-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04:/home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m04 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp testdata/cp-test.txt                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013:/home/docker/cp-test_ha-105013-m04_ha-105013.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013 sudo cat                                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m02:/home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m02 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m03:/home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n                                                                 | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | ha-105013-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105013 ssh -n ha-105013-m03 sudo cat                                          | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | /home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105013 node stop m02 -v=7                                                     | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105013 node start m02 -v=7                                                    | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC | 14 Aug 24 00:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105013 -v=7                                                           | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-105013 -v=7                                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-105013 --wait=true -v=7                                                    | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:11 UTC | 14 Aug 24 00:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105013                                                                | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:14 UTC |                     |
	| node    | ha-105013 node delete m03 -v=7                                                   | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:14 UTC | 14 Aug 24 00:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-105013 stop -v=7                                                              | ha-105013 | jenkins | v1.33.1 | 14 Aug 24 00:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:11:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:11:01.603645   32343 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:11:01.603869   32343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:11:01.603877   32343 out.go:304] Setting ErrFile to fd 2...
	I0814 00:11:01.603881   32343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:11:01.604023   32343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:11:01.604532   32343 out.go:298] Setting JSON to false
	I0814 00:11:01.605605   32343 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3208,"bootTime":1723591054,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:11:01.605747   32343 start.go:139] virtualization: kvm guest
	I0814 00:11:01.608323   32343 out.go:177] * [ha-105013] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:11:01.609579   32343 notify.go:220] Checking for updates...
	I0814 00:11:01.609599   32343 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:11:01.611171   32343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:11:01.612889   32343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:11:01.614091   32343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:11:01.615186   32343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:11:01.616542   32343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:11:01.617947   32343 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:11:01.618071   32343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:11:01.618465   32343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:11:01.618516   32343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:11:01.633575   32343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0814 00:11:01.634070   32343 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:11:01.634588   32343 main.go:141] libmachine: Using API Version  1
	I0814 00:11:01.634613   32343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:11:01.634960   32343 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:11:01.635138   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.670109   32343 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:11:01.671351   32343 start.go:297] selected driver: kvm2
	I0814 00:11:01.671371   32343 start.go:901] validating driver "kvm2" against &{Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:11:01.671542   32343 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:11:01.671847   32343 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:11:01.671918   32343 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:11:01.686470   32343 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:11:01.687452   32343 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:11:01.687551   32343 cni.go:84] Creating CNI manager for ""
	I0814 00:11:01.687569   32343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 00:11:01.687660   32343 start.go:340] cluster config:
	{Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:11:01.687842   32343 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:11:01.689692   32343 out.go:177] * Starting "ha-105013" primary control-plane node in "ha-105013" cluster
	I0814 00:11:01.690919   32343 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:11:01.690968   32343 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:11:01.690980   32343 cache.go:56] Caching tarball of preloaded images
	I0814 00:11:01.691058   32343 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:11:01.691070   32343 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 00:11:01.691183   32343 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/config.json ...
	I0814 00:11:01.691404   32343 start.go:360] acquireMachinesLock for ha-105013: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:11:01.691456   32343 start.go:364] duration metric: took 33.019µs to acquireMachinesLock for "ha-105013"
	I0814 00:11:01.691475   32343 start.go:96] Skipping create...Using existing machine configuration
	I0814 00:11:01.691492   32343 fix.go:54] fixHost starting: 
	I0814 00:11:01.691774   32343 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:11:01.691813   32343 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:11:01.705778   32343 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33045
	I0814 00:11:01.706192   32343 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:11:01.706654   32343 main.go:141] libmachine: Using API Version  1
	I0814 00:11:01.706676   32343 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:11:01.706964   32343 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:11:01.707134   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.707289   32343 main.go:141] libmachine: (ha-105013) Calling .GetState
	I0814 00:11:01.708800   32343 fix.go:112] recreateIfNeeded on ha-105013: state=Running err=<nil>
	W0814 00:11:01.708820   32343 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 00:11:01.710621   32343 out.go:177] * Updating the running kvm2 "ha-105013" VM ...
	I0814 00:11:01.711941   32343 machine.go:94] provisionDockerMachine start ...
	I0814 00:11:01.711966   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:11:01.712173   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.714197   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.714615   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.714643   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.714736   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.714911   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.715067   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.715209   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.715365   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.715548   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.715561   32343 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:11:01.827484   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105013
	
	I0814 00:11:01.827513   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:01.827823   32343 buildroot.go:166] provisioning hostname "ha-105013"
	I0814 00:11:01.827850   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:01.828071   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.830717   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.831169   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.831192   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.831365   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.831534   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.831718   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.831879   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.832041   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.832240   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.832254   32343 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105013 && echo "ha-105013" | sudo tee /etc/hostname
	I0814 00:11:01.953778   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105013
	
	I0814 00:11:01.953811   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:01.956433   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.956877   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:01.956905   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:01.957039   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:01.957222   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.957363   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:01.957503   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:01.957663   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:01.957893   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:01.957916   32343 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105013' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105013/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105013' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:11:02.063262   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:11:02.063302   32343 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:11:02.063329   32343 buildroot.go:174] setting up certificates
	I0814 00:11:02.063339   32343 provision.go:84] configureAuth start
	I0814 00:11:02.063348   32343 main.go:141] libmachine: (ha-105013) Calling .GetMachineName
	I0814 00:11:02.063662   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:11:02.066704   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.067260   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.067288   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.067426   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.069799   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.070165   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.070193   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.070313   32343 provision.go:143] copyHostCerts
	I0814 00:11:02.070344   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:11:02.070387   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:11:02.070403   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:11:02.070471   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:11:02.070578   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:11:02.070621   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:11:02.070634   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:11:02.070676   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:11:02.070749   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:11:02.070773   32343 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:11:02.070782   32343 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:11:02.070818   32343 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:11:02.070890   32343 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.ha-105013 san=[127.0.0.1 192.168.39.79 ha-105013 localhost minikube]
	I0814 00:11:02.206902   32343 provision.go:177] copyRemoteCerts
	I0814 00:11:02.206961   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:11:02.206982   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.209768   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.210200   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.210230   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.210419   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:02.210595   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.210745   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:02.210879   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:11:02.293936   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 00:11:02.294023   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:11:02.322797   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 00:11:02.322867   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0814 00:11:02.346721   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 00:11:02.346778   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 00:11:02.369742   32343 provision.go:87] duration metric: took 306.389195ms to configureAuth
	I0814 00:11:02.369771   32343 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:11:02.370036   32343 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:11:02.370146   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:11:02.372966   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.373431   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:11:02.373456   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:11:02.373641   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:11:02.373825   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.373979   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:11:02.374112   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:11:02.374271   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:11:02.374475   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:11:02.374494   32343 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:12:33.289846   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:12:33.289877   32343 machine.go:97] duration metric: took 1m31.577918269s to provisionDockerMachine
	I0814 00:12:33.289889   32343 start.go:293] postStartSetup for "ha-105013" (driver="kvm2")
	I0814 00:12:33.289899   32343 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:12:33.289931   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.290285   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:12:33.290322   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.293621   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.294069   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.294098   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.294233   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.294469   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.294647   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.294829   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.381642   32343 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:12:33.385433   32343 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:12:33.385459   32343 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:12:33.385520   32343 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:12:33.385608   32343 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:12:33.385626   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /etc/ssl/certs/165892.pem
	I0814 00:12:33.385717   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:12:33.394857   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:12:33.417760   32343 start.go:296] duration metric: took 127.857642ms for postStartSetup
	I0814 00:12:33.417825   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.418132   32343 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0814 00:12:33.418169   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.420782   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.421132   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.421159   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.421308   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.421497   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.421663   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.421776   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	W0814 00:12:33.504353   32343 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0814 00:12:33.504392   32343 fix.go:56] duration metric: took 1m31.812906917s for fixHost
	I0814 00:12:33.504420   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.506827   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.507156   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.507183   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.507311   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.507506   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.507668   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.507804   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.507965   32343 main.go:141] libmachine: Using SSH client type: native
	I0814 00:12:33.508141   32343 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0814 00:12:33.508154   32343 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 00:12:33.615332   32343 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723594353.579486004
	
	I0814 00:12:33.615354   32343 fix.go:216] guest clock: 1723594353.579486004
	I0814 00:12:33.615361   32343 fix.go:229] Guest: 2024-08-14 00:12:33.579486004 +0000 UTC Remote: 2024-08-14 00:12:33.504401796 +0000 UTC m=+91.934102516 (delta=75.084208ms)
	I0814 00:12:33.615385   32343 fix.go:200] guest clock delta is within tolerance: 75.084208ms
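
	The fix.go lines above compare the guest VM's wall clock (1723594353.579486004) against the host's reading and accept the ~75 ms skew without forcing a clock resync. A minimal sketch of that tolerance check, reusing the two timestamps from the log; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff:

	package main

	import (
		"fmt"
		"time"
	)

	// maxClockDelta is an assumed tolerance; minikube's real threshold lives in
	// its machine fix-up code and may differ.
	const maxClockDelta = 2 * time.Second

	// clockDeltaWithinTolerance reports whether guest and host clocks are close
	// enough that no resync is needed, and returns the absolute delta.
	func clockDeltaWithinTolerance(guest, host time.Time) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= maxClockDelta
	}

	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1723594353, 579486004).UTC()
		host := time.Date(2024, 8, 14, 0, 12, 33, 504401796, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=75.084208ms within tolerance=true
	}
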
	I0814 00:12:33.615391   32343 start.go:83] releasing machines lock for "ha-105013", held for 1m31.92392306s
	I0814 00:12:33.615409   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.615679   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:12:33.618207   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.618570   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.618599   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.618752   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619259   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619481   32343 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:12:33.619617   32343 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:12:33.619658   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.619713   32343 ssh_runner.go:195] Run: cat /version.json
	I0814 00:12:33.619737   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:12:33.622147   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622513   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.622539   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622556   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.622700   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.622853   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.623002   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:33.623006   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.623026   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:33.623164   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:12:33.623167   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.623323   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:12:33.623463   32343 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:12:33.623689   32343 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:12:33.699292   32343 ssh_runner.go:195] Run: systemctl --version
	I0814 00:12:33.736019   32343 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:12:33.898316   32343 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 00:12:33.909251   32343 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:12:33.909327   32343 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:12:33.918778   32343 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 00:12:33.918807   32343 start.go:495] detecting cgroup driver to use...
	I0814 00:12:33.918882   32343 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:12:33.937729   32343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:12:33.951960   32343 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:12:33.952015   32343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:12:33.965826   32343 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:12:33.979623   32343 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:12:34.134533   32343 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:12:34.280172   32343 docker.go:233] disabling docker service ...
	I0814 00:12:34.280240   32343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:12:34.296238   32343 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:12:34.309431   32343 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:12:34.453375   32343 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:12:34.598121   32343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:12:34.611594   32343 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:12:34.629089   32343 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 00:12:34.629138   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.638758   32343 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:12:34.638814   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.648110   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.658193   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.669635   32343 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:12:34.681229   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.692327   32343 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.702591   32343 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:12:34.713805   32343 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:12:34.724011   32343 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:12:34.733984   32343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:12:34.882844   32343 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:12:38.037094   32343 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.154214514s)
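
	The sed commands above pin the pause image to registry.k8s.io/pause:3.10 and switch CRI-O to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place before restarting crio. A rough Go equivalent of those two substitutions, run against an assumed sample of the drop-in rather than the real file:

	package main

	import (
		"fmt"
		"regexp"
	)

	// A minimal sketch of the two in-place edits performed above; the starting
	// config text is illustrative, not read from a host.
	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	`
		pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

		conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}
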
	I0814 00:12:38.037133   32343 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:12:38.037176   32343 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:12:38.041950   32343 start.go:563] Will wait 60s for crictl version
	I0814 00:12:38.042018   32343 ssh_runner.go:195] Run: which crictl
	I0814 00:12:38.045805   32343 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:12:38.084890   32343 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:12:38.084988   32343 ssh_runner.go:195] Run: crio --version
	I0814 00:12:38.116034   32343 ssh_runner.go:195] Run: crio --version
	I0814 00:12:38.147597   32343 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 00:12:38.148800   32343 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:12:38.151443   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:38.151843   32343 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:12:38.151868   32343 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:12:38.152034   32343 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 00:12:38.156664   32343 kubeadm.go:883] updating cluster {Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:12:38.156803   32343 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:12:38.156853   32343 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:12:38.197722   32343 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:12:38.197744   32343 crio.go:433] Images already preloaded, skipping extraction
	I0814 00:12:38.197797   32343 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:12:38.234654   32343 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:12:38.234680   32343 cache_images.go:84] Images are preloaded, skipping loading
	I0814 00:12:38.234702   32343 kubeadm.go:934] updating node { 192.168.39.79 8443 v1.31.0 crio true true} ...
	I0814 00:12:38.234826   32343 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105013 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 00:12:38.234909   32343 ssh_runner.go:195] Run: crio config
	I0814 00:12:38.279386   32343 cni.go:84] Creating CNI manager for ""
	I0814 00:12:38.279408   32343 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 00:12:38.279420   32343 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:12:38.279447   32343 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105013 NodeName:ha-105013 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 00:12:38.279583   32343 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105013"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
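
	minikube renders a kubeadm configuration like the one just printed from a Go template filled with per-node values (advertise address, node name, pod/service CIDRs, Kubernetes version). A stripped-down sketch of that idea using text/template; the struct, field names, and the much-shortened template here are illustrative assumptions, not minikube's bootstrapper code:

	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values mirror the node in the log above.
		_ = t.Execute(os.Stdout, kubeadmParams{
			AdvertiseAddress:  "192.168.39.79",
			BindPort:          8443,
			NodeName:          "ha-105013",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.0",
		})
	}
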
	
	I0814 00:12:38.279610   32343 kube-vip.go:115] generating kube-vip config ...
	I0814 00:12:38.279651   32343 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 00:12:38.291204   32343 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 00:12:38.291319   32343 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0814 00:12:38.291397   32343 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:12:38.300946   32343 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:12:38.301007   32343 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 00:12:38.309906   32343 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0814 00:12:38.324963   32343 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:12:38.340034   32343 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0814 00:12:38.354958   32343 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 00:12:38.377057   32343 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 00:12:38.380992   32343 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:12:38.538286   32343 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:12:38.552348   32343 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013 for IP: 192.168.39.79
	I0814 00:12:38.552370   32343 certs.go:194] generating shared ca certs ...
	I0814 00:12:38.552384   32343 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.552528   32343 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:12:38.552577   32343 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:12:38.552587   32343 certs.go:256] generating profile certs ...
	I0814 00:12:38.552660   32343 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/client.key
	I0814 00:12:38.552687   32343 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896
	I0814 00:12:38.552707   32343 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.79 192.168.39.160 192.168.39.177 192.168.39.254]
	I0814 00:12:38.699793   32343 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 ...
	I0814 00:12:38.699822   32343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896: {Name:mkdb4775096c6c509b34c1363d8ad01cbc342d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.699979   32343 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896 ...
	I0814 00:12:38.699993   32343 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896: {Name:mkceb0965afb0da76da07c8d2c54f2a66a4991ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:12:38.700070   32343 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt.f6dc1896 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt
	I0814 00:12:38.700211   32343 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key.f6dc1896 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key
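
	The apiserver serving certificate regenerated above gets a SAN list covering every control-plane IP plus the 192.168.39.254 VIP, so clients can reach any node or the virtual IP over TLS. A minimal self-signed sketch of issuing a certificate with such a SAN list using Go's crypto/x509; minikube actually signs against its CA key, and the names and three-year lifetime here are illustrative:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			// SANs modelled on the log line above.
			DNSNames: []string{"ha-105013", "localhost", "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("192.168.39.79"), net.ParseIP("192.168.39.254"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed for brevity: template doubles as parent.
		der, err := x509.CreateCertificate(rand.Reader, &tpl, &tpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
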
	I0814 00:12:38.700330   32343 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key
	I0814 00:12:38.700343   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 00:12:38.700356   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 00:12:38.700369   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 00:12:38.700381   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 00:12:38.700394   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 00:12:38.700406   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 00:12:38.700418   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 00:12:38.700429   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 00:12:38.700476   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:12:38.700505   32343 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:12:38.700512   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:12:38.700534   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:12:38.700563   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:12:38.700586   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:12:38.700625   32343 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:12:38.700650   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:38.700673   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem -> /usr/share/ca-certificates/16589.pem
	I0814 00:12:38.700692   32343 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /usr/share/ca-certificates/165892.pem
	I0814 00:12:38.701198   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:12:38.726406   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:12:38.749768   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:12:38.772828   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:12:38.801519   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 00:12:38.870024   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:12:38.893824   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:12:38.962424   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/ha-105013/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 00:12:38.998664   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:12:39.049339   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:12:39.104408   32343 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:12:39.156645   32343 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:12:39.189277   32343 ssh_runner.go:195] Run: openssl version
	I0814 00:12:39.200714   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:12:39.212604   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.221643   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.221696   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:12:39.239343   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 00:12:39.252957   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:12:39.265663   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.270014   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.270098   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:12:39.275744   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:12:39.287096   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:12:39.300267   32343 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.304693   32343 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.304764   32343 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:12:39.311422   32343 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:12:39.325516   32343 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:12:39.329959   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 00:12:39.336104   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 00:12:39.341855   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 00:12:39.347208   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 00:12:39.353407   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 00:12:39.358860   32343 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 00:12:39.368417   32343 kubeadm.go:392] StartCluster: {Name:ha-105013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-105013 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.102 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:12:39.368582   32343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:12:39.368667   32343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:12:39.413358   32343 cri.go:89] found id: "4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca"
	I0814 00:12:39.413390   32343 cri.go:89] found id: "c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12"
	I0814 00:12:39.413394   32343 cri.go:89] found id: "a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8"
	I0814 00:12:39.413398   32343 cri.go:89] found id: "e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c"
	I0814 00:12:39.413400   32343 cri.go:89] found id: "ab27379d6e6bb1a395cb47aa01564ceeda01f91b0c78c97a50d2a4856935bed8"
	I0814 00:12:39.413404   32343 cri.go:89] found id: "d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45"
	I0814 00:12:39.413406   32343 cri.go:89] found id: "cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7"
	I0814 00:12:39.413409   32343 cri.go:89] found id: "e0adc472eb64c654df78233de9a2e57e4c6919b76e471e24a0195621f819fb12"
	I0814 00:12:39.413411   32343 cri.go:89] found id: "f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58"
	I0814 00:12:39.413417   32343 cri.go:89] found id: "9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241"
	I0814 00:12:39.413421   32343 cri.go:89] found id: "36543aa9640de83e246f62f693f3fa3b071676d8a70cd465f8e1921695121be2"
	I0814 00:12:39.413424   32343 cri.go:89] found id: "8092755d486b62c1b259e654291dfa106a8783f58cd9651dfa51bbc4cf7824a3"
	I0814 00:12:39.413427   32343 cri.go:89] found id: ""
	I0814 00:12:39.413468   32343 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.425317402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594651425293971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60e6d8d1-5cb8-47bd-a463-e78630c9240a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.425819754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d6d983-b125-47bf-9715-b9f5bd78844d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.425917065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d6d983-b125-47bf-9715-b9f5bd78844d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.426981338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d6d983-b125-47bf-9715-b9f5bd78844d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.521184456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcdfa101-4960-4406-a114-98912cd4cd5b name=/runtime.v1.RuntimeService/Version
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.521263950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcdfa101-4960-4406-a114-98912cd4cd5b name=/runtime.v1.RuntimeService/Version
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.523066435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba462fac-08ce-4788-a269-2753543a385c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.523511932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594651523484504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba462fac-08ce-4788-a269-2753543a385c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.524238710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e735872-8812-4b61-b735-25e099bdf5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.524363337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e735872-8812-4b61-b735-25e099bdf5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.525166273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e735872-8812-4b61-b735-25e099bdf5e8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.565752161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81b09eae-c747-4f51-a6a7-e3c817f3e131 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.565828530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81b09eae-c747-4f51-a6a7-e3c817f3e131 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.567397840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91f9d739-2995-4486-bd83-5126dcdd4371 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.567798006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594651567776207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91f9d739-2995-4486-bd83-5126dcdd4371 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.568252560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e62d058-b08d-4618-9401-92e94215a569 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.568332304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e62d058-b08d-4618-9401-92e94215a569 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:17:31 ha-105013 crio[3051]: time="2024-08-14 00:17:31.568753098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:34e743530eb45a69fc71d3f83fa27974793125b8efd320233acc9e5ade3e1b86,PodSandboxId:b498a018e05bf3ac9da7579b28c717ed7765b3375bdf19fab16f537265f27584,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723594398910315851,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723594396983816289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723594394636411114,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination
-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39abf69130b4b993f132c018ec66d7884b2ab2fbe504637625587a1e81f43838,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723594388632298333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4235873ebc639832799ba88b9f1b85efc11f39bd3b247b1c50252be54d7c9ca,PodSandboxId:3f9f663df8afebeb3bdd97e02d7eceb7d8432250d11b7843bead5f3cef68baf2,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723594379677924888,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eba9f30404fc6595cd517b2e044ad070,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f,PodSandboxId:9da9ca9a7a69d1e091145ac2e2410cbf8f5d15734f6ea2adbee9de5b28876842,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723594365577727349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b,PodSandboxId:b27cd500e6978907fcff03f511984b81c8eec91472a7cba9c370696ac0e08cb3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723594365529230157,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4,PodSandboxId:f8e5c70a7011bb1fca0a3d4b7824fd431425469c4d0540e94e955460f58ba58c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723594365434252393,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c74f85631d30d473
5c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949,PodSandboxId:58bac8e5e4e59d11185d31a70f4cd2234e8a17753800ba7d9d99a61743dea7ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594365441095077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8c3091f0023c0b7e493a7be78369fd69bc71f870b3a8815a8c78c94c51c560,PodSandboxId:de9af903660ccf6557ac92a6884763dbba68904a1befb90197dbb3005e32e049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723594365366046360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6555aaf3-8661-4508-8993-e27e91fd75b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2,PodSandboxId:d5f7e049035c79d28292c24db15c2bc02b24e788548cc97debeb3ee237a9f922,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723594365341838203,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33,PodSandboxId:c9f91079e28b929d7f77ed225df2140b801cedeaeb4f3aac95c73a52a99e98d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723594365260100912,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f1d58febfc5b6695f71d52f2f23febc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca,PodSandboxId:2f0f36ce454ff2e17dc995ce42d151c07f7af18f30af746f5be432aa9aee5828,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723594359056009370,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12,PodSandboxId:f7f60d4f6540bdb45e5b8d3ab66c241143eee7eeab4bf893cbf2c25a60f54e5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723594359011568448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfa077ca8bb2ae3949f71c038c9eb784,},Anno
tations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d7e2e3718db070bf11ac3c5202d785b481f0ffd2bfb576fb739826e1f002f3f,PodSandboxId:cdd40c63d92d41d78866e3821cd5cbe7fa6a7a71f40d8ec433eb78a73c2b0cd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723594016788579811,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lq24p,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a0c03f6d-f644-43d8-a8d6-2079f90d2bf2,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c,PodSandboxId:e9a53f92642e9f4eac65fa9eb0c2b1d5979d991666d848c42bcf5091f5b97c20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844833213150,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9b46,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 106e607e-b870-42ca-ad43-d80238452cd4,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8,PodSandboxId:6283c8ce8359065cdf2c1e90a986552ccc30cd0cd4d238f157e7a2c5194e7b80,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723593844850304177,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-qlqtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4a3e3e6-b8e7-4c32-a5af-065aa78111f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45,PodSandboxId:b98b6b68f5b5a95386d58d6fb01c306186f6a22cb4df64e4da46de670a827c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723593832623775335,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6m57q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1349813-8756-4972-9e6c-ae1bf33a8921,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7,PodSandboxId:f4ad05be5bf18bde191989a3918a8be62b318d331a5748204c0e1f6313038119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e61
62f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723593832475614597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qvrtb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80c16299-297a-49b0-98bc-97208c289e73,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58,PodSandboxId:24f8ff464d5f9230d8cb411739e93c3a558af6fd645023eaef8c52943dc7a7a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f44
6c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723593821530366342,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0d6f84c88b276d9caf0c279a2fd73aa,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241,PodSandboxId:8da0fafbf7974c56372b9f6bae5cb9c27185ee89a9d0ecb7ad3bec9aed881dee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAIN
ER_EXITED,CreatedAt:1723593821450872563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105013,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ac35313bb7af49a3d3262d37ba9167c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e62d058-b08d-4618-9401-92e94215a569 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34e743530eb45       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   b498a018e05bf       busybox-7dff88458-lq24p
	ee5d9b99a82fb       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   c9f91079e28b9       kube-controller-manager-ha-105013
	b0888b9785ccf       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            2                   f7f60d4f6540b       kube-apiserver-ha-105013
	39abf69130b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       2                   de9af903660cc       storage-provisioner
	f4235873ebc63       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   3f9f663df8afe       kube-vip-ha-105013
	f847be92bcda6       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   9da9ca9a7a69d       kindnet-6m57q
	d463e78fa6b27       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   b27cd500e6978       kube-proxy-qvrtb
	1c74f85631d30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   58bac8e5e4e59       coredns-6f6b679f8f-qlqtb
	8d6324bf2404b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   f8e5c70a7011b       kube-scheduler-ha-105013
	4b8c3091f0023       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       1                   de9af903660cc       storage-provisioner
	a3adba2eef6dc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d5f7e049035c7       etcd-ha-105013
	13140714cc064       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Exited              kube-controller-manager   1                   c9f91079e28b9       kube-controller-manager-ha-105013
	4b9280e9ce815       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   2f0f36ce454ff       coredns-6f6b679f8f-r9b46
	c23300665d9c7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Exited              kube-apiserver            1                   f7f60d4f6540b       kube-apiserver-ha-105013
	7d7e2e3718db0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago      Exited              busybox                   0                   cdd40c63d92d4       busybox-7dff88458-lq24p
	a6ce1804a980e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Exited              coredns                   0                   6283c8ce83590       coredns-6f6b679f8f-qlqtb
	e4a69b9d72a8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Exited              coredns                   0                   e9a53f92642e9       coredns-6f6b679f8f-r9b46
	d773535128c34       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      13 minutes ago      Exited              kindnet-cni               0                   b98b6b68f5b5a       kindnet-6m57q
	cae4a2039c73c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Exited              kube-proxy                0                   f4ad05be5bf18       kube-proxy-qvrtb
	f644ed2e09489       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Exited              kube-scheduler            0                   24f8ff464d5f9       kube-scheduler-ha-105013
	9a988632430c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Exited              etcd                      0                   8da0fafbf7974       etcd-ha-105013
	
	
	==> coredns [1c74f85631d30d4735c70a42cc9640d34007847fc31ee08e32623dc8dc6bb949] <==
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59654 - 43696 "HINFO IN 8475110043404679788.785676527835054484. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015583691s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1450265554]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:46.999) (total time: 10001ms):
	Trace[1450265554]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:57.001)
	Trace[1450265554]: [10.001464483s] [10.001464483s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[97276489]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:47.078) (total time: 10001ms):
	Trace[97276489]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:57.080)
	Trace[97276489]: [10.001600954s] [10.001600954s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: Trace[2030459363]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:54.304) (total time: 10354ms):
	Trace[2030459363]: ---"Objects listed" error:<nil> 10354ms (00:13:04.658)
	Trace[2030459363]: [10.354303344s] [10.354303344s] END
	
	
	==> coredns [4b9280e9ce815f11eda1d904348fe098c65961c0aae63b154a2157ef7caa3dca] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1214248296]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.518) (total time: 10001ms):
	Trace[1214248296]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:51.520)
	Trace[1214248296]: [10.001583284s] [10.001583284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1869583458]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.582) (total time: 10000ms):
	Trace[1869583458]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:12:51.583)
	Trace[1869583458]: [10.000797936s] [10.000797936s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1842968646]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 00:12:41.605) (total time: 10001ms):
	Trace[1842968646]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:12:51.606)
	Trace[1842968646]: [10.001612448s] [10.001612448s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50404->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:50404->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a6ce1804a980ec080327c097b9929ce80ad1eaa3cb08408175afbb903d6bccc8] <==
	[INFO] 10.244.2.2:37198 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001784517s
	[INFO] 10.244.0.4:53333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088627s
	[INFO] 10.244.2.3:45255 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011019418s
	[INFO] 10.244.2.3:57612 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196158s
	[INFO] 10.244.2.3:54906 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145382s
	[INFO] 10.244.2.2:38596 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001349954s
	[INFO] 10.244.2.2:34606 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006091s
	[INFO] 10.244.0.4:44230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102209s
	[INFO] 10.244.0.4:38978 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001780187s
	[INFO] 10.244.0.4:50077 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144003s
	[INFO] 10.244.0.4:56680 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001410286s
	[INFO] 10.244.2.3:55127 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145617s
	[INFO] 10.244.2.3:51971 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158815s
	[INFO] 10.244.2.2:39623 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014097s
	[INFO] 10.244.2.2:37680 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045642s
	[INFO] 10.244.0.4:58204 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096528s
	[INFO] 10.244.0.4:56986 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109539s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007298s
	[INFO] 10.244.2.3:58663 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133643s
	[INFO] 10.244.2.2:41772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206806s
	[INFO] 10.244.2.2:59812 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016311s
	[INFO] 10.244.2.2:44495 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100783s
	[INFO] 10.244.0.4:35084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066256s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4a69b9d72a8d3a11800a1e7f03e03649b32990bf4c5b668a0dea73074bdf45c] <==
	[INFO] 10.244.2.3:40341 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000135149s
	[INFO] 10.244.2.2:57488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122551s
	[INFO] 10.244.2.2:49117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866986s
	[INFO] 10.244.2.2:35755 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225314s
	[INFO] 10.244.2.2:54831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000705s
	[INFO] 10.244.2.2:53362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278259s
	[INFO] 10.244.2.2:33580 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070547s
	[INFO] 10.244.0.4:49349 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114773s
	[INFO] 10.244.0.4:54742 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075532s
	[INFO] 10.244.0.4:51472 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00018342s
	[INFO] 10.244.0.4:41002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108841s
	[INFO] 10.244.2.3:43436 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206743s
	[INFO] 10.244.2.3:47491 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085338s
	[INFO] 10.244.2.2:53250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000197046s
	[INFO] 10.244.2.2:35081 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089839s
	[INFO] 10.244.0.4:55990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000325273s
	[INFO] 10.244.2.3:42694 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184393s
	[INFO] 10.244.2.3:44885 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000230815s
	[INFO] 10.244.2.3:53504 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000185746s
	[INFO] 10.244.2.2:48008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154437s
	[INFO] 10.244.0.4:42515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00048243s
	[INFO] 10.244.0.4:53296 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007802s
	[INFO] 10.244.0.4:35517 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000194506s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-105013
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_03_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:03:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:03:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:13:22 +0000   Wed, 14 Aug 2024 00:04:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-105013
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0258848a17e4b85b28309eb2ed0d1a0
	  System UUID:                f0258848-a17e-4b85-b283-09eb2ed0d1a0
	  Boot ID:                    52958196-c20d-4175-83fb-2d1dfa35bdf0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lq24p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-qlqtb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-r9b46             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-105013                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-6m57q                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-105013             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-105013    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-qvrtb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-105013             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-105013                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m10s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-105013 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-105013 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-105013 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-105013 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-105013 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-105013 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           13m                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-105013 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           8m46s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Warning  ContainerGCFailed        5m44s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             4m53s (x7 over 5m55s)  kubelet          Node ha-105013 status is now: NodeNotReady
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-105013 event: Registered Node ha-105013 in Controller
	
	
	Name:               ha-105013-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_05_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:05:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:15:08 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:15:08 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:15:08 +0000   Wed, 14 Aug 2024 00:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:15:08 +0000   Wed, 14 Aug 2024 00:05:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-105013-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f980530d7ae46eba16cea428a25810e
	  System UUID:                6f980530-d7ae-46eb-a16c-ea428a25810e
	  Boot ID:                    8c1572b2-0519-4093-bfbd-60b6a740c005
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9zndz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 etcd-ha-105013-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-96bv6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-105013-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-105013-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-slwhv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-105013-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-105013-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m18s              kube-proxy       
	  Normal   Starting                 8m45s              kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-105013-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-105013-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   NodeHasNoDiskPressure    9m2s               kubelet          Node ha-105013-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 9m2s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m2s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m2s               kubelet          Node ha-105013-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m2s               kubelet          Node ha-105013-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9m2s               kubelet          Node ha-105013-m02 has been rebooted, boot id: 8c1572b2-0519-4093-bfbd-60b6a740c005
	  Normal   RegisteredNode           8m47s              node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           4m24s              node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           4m12s              node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	  Normal   RegisteredNode           3m20s              node-controller  Node ha-105013-m02 event: Registered Node ha-105013-m02 in Controller
	
	
	Name:               ha-105013-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105013-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=ha-105013
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:07:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105013-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:15:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:15:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:15:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:15:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 00:14:44 +0000   Wed, 14 Aug 2024 00:15:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-105013-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6be49b5c3de54a60bb4afcf41f306129
	  System UUID:                6be49b5c-3de5-4a60-bb4a-fcf41f306129
	  Boot ID:                    86fe3c2f-a1e4-4fe1-b1e4-999fa5730da2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cc6ch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-pzk88              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2cd8m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m55s                  kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node ha-105013-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node ha-105013-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node ha-105013-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           9m59s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           9m59s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   NodeReady                9m42s                  kubelet          Node ha-105013-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m47s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   NodeNotReady             7m57s                  node-controller  Node ha-105013-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m24s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   RegisteredNode           3m20s                  node-controller  Node ha-105013-m04 event: Registered Node ha-105013-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-105013-m04 has been rebooted, boot id: 86fe3c2f-a1e4-4fe1-b1e4-999fa5730da2
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-105013-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-105013-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-105013-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-105013-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-105013-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-105013-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.521245] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.059332] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071956] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.158719] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.119205] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.265293] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.742859] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +5.354821] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.066104] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.215402] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[  +0.072072] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.006156] kauditd_printk_skb: 23 callbacks suppressed
	[Aug14 00:04] kauditd_printk_skb: 36 callbacks suppressed
	[Aug14 00:05] kauditd_printk_skb: 24 callbacks suppressed
	[Aug14 00:12] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +0.147407] systemd-fstab-generator[2982]: Ignoring "noauto" option for root device
	[  +0.174813] systemd-fstab-generator[2996]: Ignoring "noauto" option for root device
	[  +0.143640] systemd-fstab-generator[3008]: Ignoring "noauto" option for root device
	[  +0.278510] systemd-fstab-generator[3036]: Ignoring "noauto" option for root device
	[  +3.648203] systemd-fstab-generator[3136]: Ignoring "noauto" option for root device
	[  +0.726992] kauditd_printk_skb: 137 callbacks suppressed
	[ +16.832231] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 00:13] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.886493] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [9a988632430c243612d4b0086b23d504fe2c075bbb2ecc0786bc1a49ae396241] <==
	{"level":"info","ts":"2024-08-14T00:11:02.628365Z","caller":"etcdserver/server.go:1498","msg":"leadership transfer finished","local-member-id":"a91a1bbc2c758cdc","old-leader-member-id":"a91a1bbc2c758cdc","new-leader-member-id":"d2b4737fd3ffd670","took":"100.199181ms"}
	{"level":"info","ts":"2024-08-14T00:11:02.628624Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.628652Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.628687Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629231Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629326Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629365Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629437Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:11:02.629445Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.629745Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.629777Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.629928Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.629954Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.630076Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630179Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670","error":"context canceled"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630259Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"d2b4737fd3ffd670","error":"failed to read d2b4737fd3ffd670 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-14T00:11:02.630279Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"warn","ts":"2024-08-14T00:11:02.630430Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670","error":"context canceled"}
	{"level":"info","ts":"2024-08-14T00:11:02.630445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.630456Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d2b4737fd3ffd670"}
	{"level":"info","ts":"2024-08-14T00:11:02.636242Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"warn","ts":"2024-08-14T00:11:02.636393Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.160:57234","server-name":"","error":"read tcp 192.168.39.79:2380->192.168.39.160:57234: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:11:02.638605Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.160:57228","server-name":"","error":"set tcp 192.168.39.79:2380: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T00:11:03.636466Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-08-14T00:11:03.636505Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-105013","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.79:2380"],"advertise-client-urls":["https://192.168.39.79:2379"]}
	
	
	==> etcd [a3adba2eef6dc38a56e8b38b2c0414c99640a21716d5258d8e30c84c11b895f2] <==
	{"level":"info","ts":"2024-08-14T00:14:04.799419Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.801457Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.802214Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.807265Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"b5930f6d9553dfd0","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-14T00:14:04.807312Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:04.810674Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a91a1bbc2c758cdc","to":"b5930f6d9553dfd0","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-14T00:14:04.810751Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.362707Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.177:52784","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-14T00:14:58.397776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc switched to configuration voters=(12185082236818001116 15182887236627584624)"}
	{"level":"info","ts":"2024-08-14T00:14:58.401994Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"1edb09d3fc38073e","local-member-id":"a91a1bbc2c758cdc","removed-remote-peer-id":"b5930f6d9553dfd0","removed-remote-peer-urls":["https://192.168.39.177:2380"]}
	{"level":"info","ts":"2024-08-14T00:14:58.402088Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.402354Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:58.402406Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.403168Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:58.403222Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:58.403332Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.403477Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","error":"context canceled"}
	{"level":"warn","ts":"2024-08-14T00:14:58.403529Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b5930f6d9553dfd0","error":"failed to read b5930f6d9553dfd0 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-14T00:14:58.403558Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.403669Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0","error":"context canceled"}
	{"level":"info","ts":"2024-08-14T00:14:58.403709Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a91a1bbc2c758cdc","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:58.403726Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"info","ts":"2024-08-14T00:14:58.403740Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a91a1bbc2c758cdc","removed-remote-peer-id":"b5930f6d9553dfd0"}
	{"level":"warn","ts":"2024-08-14T00:14:58.417149Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.177:52032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-14T00:14:58.417982Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.177:52040","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:17:32 up 14 min,  0 users,  load average: 0.94, 0.73, 0.46
	Linux ha-105013 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d773535128c3474359fb39d2e67a85fda4514786ccd1249690454b5c2f1aad45] <==
	I0814 00:10:23.789387       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:33.798675       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:33.798869       1 main.go:299] handling current node
	I0814 00:10:33.798939       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:33.798961       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:33.799155       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:33.799179       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:33.799234       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:33.799251       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:43.788723       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:43.788763       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:10:43.788956       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:43.788985       1 main.go:299] handling current node
	I0814 00:10:43.788996       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:43.789002       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:43.789078       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:43.789093       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:53.789336       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:10:53.789395       1 main.go:299] handling current node
	I0814 00:10:53.789415       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:10:53.789423       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:10:53.789582       1 main.go:295] Handling node with IPs: map[192.168.39.177:{}]
	I0814 00:10:53.789609       1 main.go:322] Node ha-105013-m03 has CIDR [10.244.2.0/24] 
	I0814 00:10:53.789675       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:10:53.789693       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f847be92bcda663b5a64c3ebe241dc754529718628caa01cef0525a11d01209f] <==
	I0814 00:16:46.596268       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:16:56.604688       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:16:56.604792       1 main.go:299] handling current node
	I0814 00:16:56.604823       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:16:56.604841       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:16:56.605042       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:16:56.605086       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:17:06.595861       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:17:06.595956       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:17:06.596124       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:17:06.596154       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:17:06.596282       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:17:06.596308       1 main.go:299] handling current node
	I0814 00:17:16.598369       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:17:16.598476       1 main.go:299] handling current node
	I0814 00:17:16.598504       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:17:16.598523       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:17:16.598664       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:17:16.598735       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	I0814 00:17:26.602769       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0814 00:17:26.602808       1 main.go:299] handling current node
	I0814 00:17:26.602837       1 main.go:295] Handling node with IPs: map[192.168.39.160:{}]
	I0814 00:17:26.602855       1 main.go:322] Node ha-105013-m02 has CIDR [10.244.1.0/24] 
	I0814 00:17:26.603012       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0814 00:17:26.603034       1 main.go:322] Node ha-105013-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b0888b9785ccf1d89dbd1c10a23f4f7eaf095635fe96109abd1f407fd39608fd] <==
	I0814 00:13:16.488443       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0814 00:13:16.565258       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 00:13:16.565762       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:13:16.565936       1 policy_source.go:224] refreshing policies
	I0814 00:13:16.567263       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 00:13:16.568088       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 00:13:16.568451       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 00:13:16.579700       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:13:16.583848       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 00:13:16.584503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 00:13:16.590541       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 00:13:16.591700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 00:13:16.591841       1 aggregator.go:171] initial CRD sync complete...
	I0814 00:13:16.592236       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 00:13:16.592277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 00:13:16.592302       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:13:16.592776       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0814 00:13:16.605442       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160]
	I0814 00:13:16.606962       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:13:16.619948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0814 00:13:16.626177       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0814 00:13:16.655751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:13:17.473508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0814 00:13:18.244246       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.79]
	W0814 00:15:08.243252       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.160 192.168.39.79]
	
	
	==> kube-apiserver [c23300665d9c76ac06c75fbfb737adf5b17e16c97443028c1a964c023ba15d12] <==
	I0814 00:12:39.244549       1 options.go:228] external host was not specified, using 192.168.39.79
	I0814 00:12:39.248212       1 server.go:142] Version: v1.31.0
	I0814 00:12:39.248503       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0814 00:12:39.559258       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:39.559326       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0814 00:12:39.559379       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0814 00:12:39.566965       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:12:39.570913       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0814 00:12:39.570977       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0814 00:12:39.571173       1 instance.go:232] Using reconciler: lease
	W0814 00:12:39.572051       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.560783       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.560866       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:40.572607       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.161145       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.216314       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:42.372287       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:44.447500       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:44.677435       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:45.145452       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:12:59.558381       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0814 00:12:59.559313       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0814 00:12:59.571992       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [13140714cc06469d86f9745a4c86966c693d3449ed3f3c154fbb6e14ae42ee33] <==
	I0814 00:12:46.109027       1 serving.go:386] Generated self-signed cert in-memory
	I0814 00:12:46.268452       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0814 00:12:46.269264       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:12:46.271177       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0814 00:12:46.271871       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 00:12:46.272140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 00:12:46.272306       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0814 00:13:06.274543       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.79:8443/healthz\": dial tcp 192.168.39.79:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ee5d9b99a82fb9bc7d901e38ccc970b5854914d5bdc843ac9d85a1c4a32c0819] <==
	I0814 00:15:45.099143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:15:45.117281       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:15:45.188459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.224411ms"
	I0814 00:15:45.188622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="91.772µs"
	I0814 00:15:48.248333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	I0814 00:15:50.209013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-105013-m04"
	E0814 00:16:00.023947       1 gc_controller.go:151] "Failed to get node" err="node \"ha-105013-m03\" not found" logger="pod-garbage-collector-controller" node="ha-105013-m03"
	E0814 00:16:00.023990       1 gc_controller.go:151] "Failed to get node" err="node \"ha-105013-m03\" not found" logger="pod-garbage-collector-controller" node="ha-105013-m03"
	E0814 00:16:00.023997       1 gc_controller.go:151] "Failed to get node" err="node \"ha-105013-m03\" not found" logger="pod-garbage-collector-controller" node="ha-105013-m03"
	E0814 00:16:00.024003       1 gc_controller.go:151] "Failed to get node" err="node \"ha-105013-m03\" not found" logger="pod-garbage-collector-controller" node="ha-105013-m03"
	E0814 00:16:00.024010       1 gc_controller.go:151] "Failed to get node" err="node \"ha-105013-m03\" not found" logger="pod-garbage-collector-controller" node="ha-105013-m03"
	I0814 00:16:00.035578       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-105013-m03"
	I0814 00:16:00.077804       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-105013-m03"
	I0814 00:16:00.078428       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-105013-m03"
	I0814 00:16:00.103942       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-105013-m03"
	I0814 00:16:00.104198       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2ps5t"
	I0814 00:16:00.131317       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2ps5t"
	I0814 00:16:00.131359       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-77bnm"
	I0814 00:16:00.160102       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-77bnm"
	I0814 00:16:00.161295       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-105013-m03"
	I0814 00:16:00.191312       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-105013-m03"
	I0814 00:16:00.191413       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-105013-m03"
	I0814 00:16:00.213670       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-105013-m03"
	I0814 00:16:00.213705       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-105013-m03"
	I0814 00:16:00.248054       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-105013-m03"
	
	
	==> kube-proxy [cae4a2039c73c8b44c95f3baeb4245c44b9cf0e510c0c05c79eff9d68a7af5c7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:03:52.659061       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:03:52.679318       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.79"]
	E0814 00:03:52.679386       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:03:52.731286       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:03:52.731326       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:03:52.731353       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:03:52.733433       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:03:52.733831       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:03:52.733964       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:03:52.735266       1 config.go:197] "Starting service config controller"
	I0814 00:03:52.735341       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:03:52.735408       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:03:52.735443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:03:52.736166       1 config.go:326] "Starting node config controller"
	I0814 00:03:52.736203       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:03:52.835703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:03:52.835796       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:03:52.836469       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d463e78fa6b27c92b26cd1bc806e34320df69961aae19159122eec7c9250a80b] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:12:56.391411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": net/http: TLS handshake timeout"
	E0814 00:13:03.028604       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host - error from a previous attempt: read tcp 192.168.39.254:39438->192.168.39.254:8443: read: connection reset by peer"
	E0814 00:13:06.101086       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 00:13:12.244696       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105013\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0814 00:13:21.739673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.79"]
	E0814 00:13:21.739813       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:13:21.773436       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:13:21.773478       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:13:21.773541       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:13:21.775828       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:13:21.776161       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:13:21.776184       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:13:21.777573       1 config.go:197] "Starting service config controller"
	I0814 00:13:21.777623       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:13:21.777647       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:13:21.777662       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:13:21.778493       1 config.go:326] "Starting node config controller"
	I0814 00:13:21.778519       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:13:21.878546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:13:21.878583       1 shared_informer.go:320] Caches are synced for node config
	I0814 00:13:21.878598       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [8d6324bf2404b4c092f212b1262882c454050b8a4c18214d22cbb56d999ed4d4] <==
	E0814 00:13:08.772038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.79:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.904714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.79:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.904825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.79:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:08.906290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:08.906386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.79:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:09.699746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:09.699801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.79:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:10.346371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:10.346425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.79:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:10.796842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.79:8443: connect: connection refused
	E0814 00:13:10.797015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.79:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.79:8443: connect: connection refused" logger="UnhandledError"
	W0814 00:13:16.498465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:13:16.498608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:13:16.498472       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:13:16.499409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:13:16.505215       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 00:13:16.505255       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 00:13:41.688494       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 00:14:54.979723       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-cc6ch\": pod busybox-7dff88458-cc6ch is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-cc6ch" node="ha-105013-m04"
	E0814 00:14:54.979950       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a4415006-27c8-4ac9-ac2d-a73687bb8f0f(default/busybox-7dff88458-cc6ch) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-cc6ch"
	E0814 00:14:54.980045       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-cc6ch\": pod busybox-7dff88458-cc6ch is already assigned to node \"ha-105013-m04\"" pod="default/busybox-7dff88458-cc6ch"
	I0814 00:14:54.980124       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-cc6ch" node="ha-105013-m04"
	E0814 00:14:54.981146       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zndz\": pod busybox-7dff88458-9zndz is already assigned to node \"ha-105013-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9zndz" node="ha-105013-m02"
	E0814 00:14:54.981268       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9zndz\": pod busybox-7dff88458-9zndz is already assigned to node \"ha-105013-m02\"" pod="default/busybox-7dff88458-9zndz"
	I0814 00:14:54.981357       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9zndz" node="ha-105013-m02"
	
	
	==> kube-scheduler [f644ed2e094890dd8d28e4ca035634bf6340e598553601368c4025ba64cbbc58] <==
	I0814 00:03:48.433993       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 00:06:53.566989       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="fd3f4fa0-b215-4671-8d8a-310dcd4cac18" pod="default/busybox-7dff88458-5px5v" assumedNode="ha-105013-m03" currentNode="ha-105013-m02"
	E0814 00:06:53.578753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5px5v\": pod busybox-7dff88458-5px5v is already assigned to node \"ha-105013-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5px5v" node="ha-105013-m02"
	E0814 00:06:53.579116       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fd3f4fa0-b215-4671-8d8a-310dcd4cac18(default/busybox-7dff88458-5px5v) was assumed on ha-105013-m02 but assigned to ha-105013-m03" pod="default/busybox-7dff88458-5px5v"
	E0814 00:06:53.579261       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5px5v\": pod busybox-7dff88458-5px5v is already assigned to node \"ha-105013-m03\"" pod="default/busybox-7dff88458-5px5v"
	I0814 00:06:53.579379       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5px5v" node="ha-105013-m03"
	E0814 00:07:31.050023       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2cd8m\": pod kube-proxy-2cd8m is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2cd8m" node="ha-105013-m04"
	E0814 00:07:31.050117       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e5bb37bb-b8f9-4a66-8a98-778055989065(kube-system/kube-proxy-2cd8m) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2cd8m"
	E0814 00:07:31.050142       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2cd8m\": pod kube-proxy-2cd8m is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-2cd8m"
	I0814 00:07:31.050175       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2cd8m" node="ha-105013-m04"
	E0814 00:07:31.114249       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5xxs4\": pod kube-proxy-5xxs4 is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5xxs4" node="ha-105013-m04"
	E0814 00:07:31.115258       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5xxs4\": pod kube-proxy-5xxs4 is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-5xxs4"
	E0814 00:07:31.117351       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-t8dfd\": pod kube-proxy-t8dfd is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-t8dfd" node="ha-105013-m04"
	E0814 00:07:31.117474       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 34d2f2d4-f6f7-48b0-9325-0a4be891bc91(kube-system/kube-proxy-t8dfd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-t8dfd"
	E0814 00:07:31.117548       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-t8dfd\": pod kube-proxy-t8dfd is already assigned to node \"ha-105013-m04\"" pod="kube-system/kube-proxy-t8dfd"
	I0814 00:07:31.117593       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-t8dfd" node="ha-105013-m04"
	E0814 00:07:31.118258       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2dmsx\": pod kindnet-2dmsx is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2dmsx" node="ha-105013-m04"
	E0814 00:07:31.118324       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0343723-94eb-47f2-a11c-ed9a25875f46(kube-system/kindnet-2dmsx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2dmsx"
	E0814 00:07:31.118343       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2dmsx\": pod kindnet-2dmsx is already assigned to node \"ha-105013-m04\"" pod="kube-system/kindnet-2dmsx"
	I0814 00:07:31.118359       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2dmsx" node="ha-105013-m04"
	E0814 00:07:31.128785       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jgnhw\": pod kindnet-jgnhw is already assigned to node \"ha-105013-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-jgnhw" node="ha-105013-m04"
	E0814 00:07:31.129287       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ffc28f4d-e2ce-4f73-a7d3-4df8b62d445b(kube-system/kindnet-jgnhw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-jgnhw"
	E0814 00:07:31.129360       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jgnhw\": pod kindnet-jgnhw is already assigned to node \"ha-105013-m04\"" pod="kube-system/kindnet-jgnhw"
	I0814 00:07:31.129458       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jgnhw" node="ha-105013-m04"
	E0814 00:11:02.504669       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 00:15:57 ha-105013 kubelet[1324]: E0814 00:15:57.808748    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594557808406205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:15:57 ha-105013 kubelet[1324]: E0814 00:15:57.809291    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594557808406205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:07 ha-105013 kubelet[1324]: E0814 00:16:07.811666    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594567811267748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:07 ha-105013 kubelet[1324]: E0814 00:16:07.812034    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594567811267748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:17 ha-105013 kubelet[1324]: E0814 00:16:17.813671    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594577813277470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:17 ha-105013 kubelet[1324]: E0814 00:16:17.814256    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594577813277470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:27 ha-105013 kubelet[1324]: E0814 00:16:27.815622    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594587815274138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:27 ha-105013 kubelet[1324]: E0814 00:16:27.816044    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594587815274138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:37 ha-105013 kubelet[1324]: E0814 00:16:37.818476    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594597817964861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:37 ha-105013 kubelet[1324]: E0814 00:16:37.818781    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594597817964861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:47 ha-105013 kubelet[1324]: E0814 00:16:47.640630    1324 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 00:16:47 ha-105013 kubelet[1324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 00:16:47 ha-105013 kubelet[1324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:16:47 ha-105013 kubelet[1324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:16:47 ha-105013 kubelet[1324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:16:47 ha-105013 kubelet[1324]: E0814 00:16:47.821751    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594607821244269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:47 ha-105013 kubelet[1324]: E0814 00:16:47.821785    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594607821244269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:57 ha-105013 kubelet[1324]: E0814 00:16:57.823852    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594617823277163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:16:57 ha-105013 kubelet[1324]: E0814 00:16:57.824181    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594617823277163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:07 ha-105013 kubelet[1324]: E0814 00:17:07.826789    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594627826376737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:07 ha-105013 kubelet[1324]: E0814 00:17:07.827327    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594627826376737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:17 ha-105013 kubelet[1324]: E0814 00:17:17.829706    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594637829367632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:17 ha-105013 kubelet[1324]: E0814 00:17:17.830037    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594637829367632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:27 ha-105013 kubelet[1324]: E0814 00:17:27.831796    1324 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594647831392718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:17:27 ha-105013 kubelet[1324]: E0814 00:17:27.832137    1324 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723594647831392718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145823,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:17:31.152054   34565 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19429-9425/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105013 -n ha-105013
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105013 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-745925
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-745925
E0814 00:32:14.189060   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:33:08.589368   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-745925: exit status 82 (2m1.720648605s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-745925-m03"  ...
	* Stopping node "multinode-745925-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-745925" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-745925 --wait=true -v=8 --alsologtostderr
E0814 00:35:05.518944   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-745925 --wait=true -v=8 --alsologtostderr: (3m20.905897893s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-745925
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-745925 -n multinode-745925
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 logs -n 25
E0814 00:37:14.186260   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-745925 logs -n 25: (1.358873751s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925:/home/docker/cp-test_multinode-745925-m02_multinode-745925.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925 sudo cat                                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m02_multinode-745925.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03:/home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925-m03 sudo cat                                   | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp testdata/cp-test.txt                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925:/home/docker/cp-test_multinode-745925-m03_multinode-745925.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925 sudo cat                                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m03_multinode-745925.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02:/home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925-m02 sudo cat                                   | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-745925 node stop m03                                                          | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	| node    | multinode-745925 node start                                                             | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-745925                                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC |                     |
	| stop    | -p multinode-745925                                                                     | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC |                     |
	| start   | -p multinode-745925                                                                     | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:33 UTC | 14 Aug 24 00:37 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-745925                                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:37 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:33:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:33:52.736985   44271 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:33:52.737282   44271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:33:52.737292   44271 out.go:304] Setting ErrFile to fd 2...
	I0814 00:33:52.737299   44271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:33:52.737529   44271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:33:52.738129   44271 out.go:298] Setting JSON to false
	I0814 00:33:52.739087   44271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4579,"bootTime":1723591054,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:33:52.739151   44271 start.go:139] virtualization: kvm guest
	I0814 00:33:52.742156   44271 out.go:177] * [multinode-745925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:33:52.743536   44271 notify.go:220] Checking for updates...
	I0814 00:33:52.743548   44271 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:33:52.744919   44271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:33:52.746133   44271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:33:52.747363   44271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:33:52.748491   44271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:33:52.749600   44271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:33:52.751357   44271 config.go:182] Loaded profile config "multinode-745925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:33:52.751428   44271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:33:52.751841   44271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:33:52.751907   44271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:33:52.766796   44271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0814 00:33:52.767156   44271 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:33:52.767686   44271 main.go:141] libmachine: Using API Version  1
	I0814 00:33:52.767707   44271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:33:52.768021   44271 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:33:52.768205   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.802988   44271 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:33:52.804368   44271 start.go:297] selected driver: kvm2
	I0814 00:33:52.804383   44271 start.go:901] validating driver "kvm2" against &{Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:33:52.804544   44271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:33:52.804864   44271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:33:52.804962   44271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:33:52.819371   44271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:33:52.820014   44271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:33:52.820090   44271 cni.go:84] Creating CNI manager for ""
	I0814 00:33:52.820108   44271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 00:33:52.820161   44271 start.go:340] cluster config:
	{Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:33:52.820283   44271 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:33:52.822701   44271 out.go:177] * Starting "multinode-745925" primary control-plane node in "multinode-745925" cluster
	I0814 00:33:52.824172   44271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:33:52.824206   44271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:33:52.824215   44271 cache.go:56] Caching tarball of preloaded images
	I0814 00:33:52.824307   44271 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:33:52.824322   44271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 00:33:52.824457   44271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/config.json ...
	I0814 00:33:52.824690   44271 start.go:360] acquireMachinesLock for multinode-745925: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:33:52.824741   44271 start.go:364] duration metric: took 30.68µs to acquireMachinesLock for "multinode-745925"
	I0814 00:33:52.824760   44271 start.go:96] Skipping create...Using existing machine configuration
	I0814 00:33:52.824781   44271 fix.go:54] fixHost starting: 
	I0814 00:33:52.825057   44271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:33:52.825090   44271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:33:52.839157   44271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0814 00:33:52.839545   44271 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:33:52.839939   44271 main.go:141] libmachine: Using API Version  1
	I0814 00:33:52.839956   44271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:33:52.840234   44271 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:33:52.840415   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.840665   44271 main.go:141] libmachine: (multinode-745925) Calling .GetState
	I0814 00:33:52.842142   44271 fix.go:112] recreateIfNeeded on multinode-745925: state=Running err=<nil>
	W0814 00:33:52.842156   44271 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 00:33:52.843948   44271 out.go:177] * Updating the running kvm2 "multinode-745925" VM ...
	I0814 00:33:52.845073   44271 machine.go:94] provisionDockerMachine start ...
	I0814 00:33:52.845088   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.845273   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:52.847942   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.848435   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:52.848460   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.848565   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:52.848728   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.848880   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.849010   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:52.849163   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:52.849343   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:52.849356   44271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:33:52.954665   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-745925
	
	I0814 00:33:52.954702   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:52.954959   44271 buildroot.go:166] provisioning hostname "multinode-745925"
	I0814 00:33:52.954981   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:52.955170   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:52.957807   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.958290   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:52.958316   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.958456   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:52.958620   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.958736   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.958873   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:52.958998   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:52.959181   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:52.959192   44271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-745925 && echo "multinode-745925" | sudo tee /etc/hostname
	I0814 00:33:53.081508   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-745925
	
	I0814 00:33:53.081545   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.084003   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.084348   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.084386   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.084561   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.084763   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.084910   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.085053   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.085232   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:53.085503   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:53.085530   44271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-745925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-745925/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-745925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:33:53.195129   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
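	The embedded shell above is idempotent: it only rewrites the 127.0.1.1 line of /etc/hosts when no entry for the hostname is present, and the empty SSH output here means nothing needed changing. A minimal sketch for double-checking the result on the guest, assuming the guest's standard hostname and grep tools:

		# Sketch: confirm the hostname provisioning took effect (run on the guest).
		hostname                               # should print multinode-745925
		grep 'multinode-745925' /etc/hosts     # should show the 127.0.1.1 mapping added or kept by the snippet above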
	I0814 00:33:53.195170   44271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:33:53.195207   44271 buildroot.go:174] setting up certificates
	I0814 00:33:53.195216   44271 provision.go:84] configureAuth start
	I0814 00:33:53.195225   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:53.195621   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:33:53.198139   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.198610   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.198636   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.198786   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.200839   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.201150   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.201188   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.201320   44271 provision.go:143] copyHostCerts
	I0814 00:33:53.201350   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:33:53.201385   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:33:53.201394   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:33:53.201460   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:33:53.201552   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:33:53.201570   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:33:53.201583   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:33:53.201610   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:33:53.201708   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:33:53.201729   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:33:53.201736   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:33:53.201759   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:33:53.201819   44271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.multinode-745925 san=[127.0.0.1 192.168.39.201 localhost minikube multinode-745925]
	I0814 00:33:53.440737   44271 provision.go:177] copyRemoteCerts
	I0814 00:33:53.440797   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:33:53.440820   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.443833   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.444164   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.444193   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.444367   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.444585   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.444770   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.444894   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:33:53.528610   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 00:33:53.528674   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:33:53.551326   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 00:33:53.551392   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0814 00:33:53.573438   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 00:33:53.573542   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 00:33:53.597025   44271 provision.go:87] duration metric: took 401.796686ms to configureAuth
	I0814 00:33:53.597058   44271 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:33:53.597360   44271 config.go:182] Loaded profile config "multinode-745925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:33:53.597465   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.600460   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.600832   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.600859   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.600993   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.601203   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.601375   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.601531   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.601706   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:53.601874   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:53.601888   44271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:35:24.313993   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:35:24.314020   44271 machine.go:97] duration metric: took 1m31.468935821s to provisionDockerMachine
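	The 00:33:53 to 00:35:24 gap above is a single SSH command: writing CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarting CRI-O, so that one restart accounts for most of the 1m31s provisionDockerMachine duration. The "printf %!s(MISSING)" rendering appears to be the Go logger re-expanding the format verb, not what ran; a sketch of the equivalent manual step, assuming root on the guest and the same insecure-registry CIDR:

		# Sketch: the sysconfig write + CRI-O restart that spans the ~91s gap above (run as root on the guest).
		mkdir -p /etc/sysconfig
		printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" > /etc/sysconfig/crio.minikube
		systemctl restart crio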
	I0814 00:35:24.314033   44271 start.go:293] postStartSetup for "multinode-745925" (driver="kvm2")
	I0814 00:35:24.314060   44271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:35:24.314085   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.314392   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:35:24.314417   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.317239   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.317767   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.317796   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.317964   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.318147   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.318356   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.318568   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.400961   44271 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:35:24.404765   44271 command_runner.go:130] > NAME=Buildroot
	I0814 00:35:24.404785   44271 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0814 00:35:24.404789   44271 command_runner.go:130] > ID=buildroot
	I0814 00:35:24.404794   44271 command_runner.go:130] > VERSION_ID=2023.02.9
	I0814 00:35:24.404799   44271 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0814 00:35:24.404909   44271 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:35:24.404929   44271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:35:24.404998   44271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:35:24.405093   44271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:35:24.405106   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /etc/ssl/certs/165892.pem
	I0814 00:35:24.405224   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:35:24.414481   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:35:24.436348   44271 start.go:296] duration metric: took 122.30144ms for postStartSetup
	I0814 00:35:24.436387   44271 fix.go:56] duration metric: took 1m31.611618248s for fixHost
	I0814 00:35:24.436406   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.439037   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.439331   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.439356   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.439499   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.439682   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.439837   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.439939   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.440100   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:35:24.440253   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:35:24.440262   44271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 00:35:24.542542   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723595724.512472430
	
	I0814 00:35:24.542570   44271 fix.go:216] guest clock: 1723595724.512472430
	I0814 00:35:24.542577   44271 fix.go:229] Guest: 2024-08-14 00:35:24.51247243 +0000 UTC Remote: 2024-08-14 00:35:24.436391084 +0000 UTC m=+91.735393673 (delta=76.081346ms)
	I0814 00:35:24.542595   44271 fix.go:200] guest clock delta is within tolerance: 76.081346ms
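	The same format-verb artifact shows up in the clock check: "date +%!s(MISSING).%!N(MISSING)" is presumably date +%s.%N, and the delta reported above is simply guest time minus the host-side timestamp taken at the same moment (about 76ms here, within tolerance). A sketch of the same comparison, assuming GNU date and bc, and reusing the SSH user, key and IP from this log:

		# Sketch: compare guest and host clocks the way fix.go does (assumes GNU date and bc on the host).
		guest=$(ssh -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa \
		    docker@192.168.39.201 'date +%s.%N')
		host=$(date +%s.%N)
		echo "clock delta: $(echo "$guest - $host" | bc) s"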
	I0814 00:35:24.542599   44271 start.go:83] releasing machines lock for "multinode-745925", held for 1m31.717847085s
	I0814 00:35:24.542618   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.542877   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:35:24.545337   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.545734   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.545763   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.545914   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546389   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546600   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546712   44271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:35:24.546769   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.546792   44271 ssh_runner.go:195] Run: cat /version.json
	I0814 00:35:24.546814   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.549376   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549565   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549785   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.549820   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549948   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.549960   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.549971   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.550140   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.550165   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.550312   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.550333   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.550491   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.550511   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.550607   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.656761   44271 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0814 00:35:24.657388   44271 command_runner.go:130] > {"iso_version": "v1.33.1-1723567878-19429", "kicbase_version": "v0.0.44-1723026928-19389", "minikube_version": "v1.33.1", "commit": "99323a71d52eff08226c75fcaff04297eb5d3584"}
	I0814 00:35:24.657557   44271 ssh_runner.go:195] Run: systemctl --version
	I0814 00:35:24.663108   44271 command_runner.go:130] > systemd 252 (252)
	I0814 00:35:24.663136   44271 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0814 00:35:24.663303   44271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:35:24.823032   44271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 00:35:24.828427   44271 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0814 00:35:24.828605   44271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:35:24.828666   44271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:35:24.837558   44271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 00:35:24.837581   44271 start.go:495] detecting cgroup driver to use...
	I0814 00:35:24.837646   44271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:35:24.853015   44271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:35:24.865844   44271 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:35:24.865906   44271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:35:24.878754   44271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:35:24.891135   44271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:35:25.029959   44271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:35:25.170308   44271 docker.go:233] disabling docker service ...
	I0814 00:35:25.170385   44271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:35:25.185699   44271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:35:25.198393   44271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:35:25.333381   44271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:35:25.468063   44271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:35:25.481207   44271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:35:25.498586   44271 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0814 00:35:25.499170   44271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 00:35:25.499224   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.509694   44271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:35:25.509756   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.519317   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.528597   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.539373   44271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:35:25.549149   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.558645   44271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.569126   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.578563   44271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:35:25.587019   44271 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0814 00:35:25.587177   44271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:35:25.595505   44271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:35:25.731918   44271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:35:29.186981   44271 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.455027566s)
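	Between 00:35:25.49 and 00:35:25.62 the provisioner rewrites the CRI-O drop-in one sed at a time: pin the pause image, force the cgroupfs cgroup manager with a "pod" conmon cgroup, and open unprivileged ports via default_sysctls, before the daemon-reload and the 3.5s restart just completed. A consolidated sketch of those edits, copied from the commands in the log and assuming /etc/crio/crio.conf.d/02-crio.conf already exists:

		# Sketch: the CRI-O drop-in edits applied above, consolidated (run as root on the guest).
		conf=/etc/crio/crio.conf.d/02-crio.conf
		sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
		sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
		sed -i '/conmon_cgroup = .*/d' "$conf"
		sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
		sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
		grep -q '^ *default_sysctls' "$conf" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
		sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
		echo 1 > /proc/sys/net/ipv4/ip_forward
		systemctl daemon-reload && systemctl restart crio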
	I0814 00:35:29.187008   44271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:35:29.187058   44271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:35:29.192103   44271 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0814 00:35:29.192124   44271 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0814 00:35:29.192131   44271 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0814 00:35:29.192138   44271 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 00:35:29.192145   44271 command_runner.go:130] > Access: 2024-08-14 00:35:29.144137867 +0000
	I0814 00:35:29.192162   44271 command_runner.go:130] > Modify: 2024-08-14 00:35:29.056136228 +0000
	I0814 00:35:29.192170   44271 command_runner.go:130] > Change: 2024-08-14 00:35:29.056136228 +0000
	I0814 00:35:29.192176   44271 command_runner.go:130] >  Birth: -
	I0814 00:35:29.192374   44271 start.go:563] Will wait 60s for crictl version
	I0814 00:35:29.192423   44271 ssh_runner.go:195] Run: which crictl
	I0814 00:35:29.195702   44271 command_runner.go:130] > /usr/bin/crictl
	I0814 00:35:29.195838   44271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:35:29.238596   44271 command_runner.go:130] > Version:  0.1.0
	I0814 00:35:29.238617   44271 command_runner.go:130] > RuntimeName:  cri-o
	I0814 00:35:29.238767   44271 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0814 00:35:29.238792   44271 command_runner.go:130] > RuntimeApiVersion:  v1
	I0814 00:35:29.240822   44271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
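	After the restart the code waits up to 60s for /var/run/crio/crio.sock, stats it, then queries the runtime through crictl. Because the /etc/crictl.yaml written above already points at the CRI-O socket, a plain "sudo crictl version" on the guest returns the same cri-o 1.29.1 answer; passing the endpoint explicitly is equivalent:

		# Sketch: query the runtime directly (the endpoint flag is optional once /etc/crictl.yaml is in place).
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version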
	I0814 00:35:29.240895   44271 ssh_runner.go:195] Run: crio --version
	I0814 00:35:29.271735   44271 command_runner.go:130] > crio version 1.29.1
	I0814 00:35:29.271762   44271 command_runner.go:130] > Version:        1.29.1
	I0814 00:35:29.271777   44271 command_runner.go:130] > GitCommit:      unknown
	I0814 00:35:29.271784   44271 command_runner.go:130] > GitCommitDate:  unknown
	I0814 00:35:29.271790   44271 command_runner.go:130] > GitTreeState:   clean
	I0814 00:35:29.271797   44271 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 00:35:29.271801   44271 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 00:35:29.271805   44271 command_runner.go:130] > Compiler:       gc
	I0814 00:35:29.271810   44271 command_runner.go:130] > Platform:       linux/amd64
	I0814 00:35:29.271825   44271 command_runner.go:130] > Linkmode:       dynamic
	I0814 00:35:29.271833   44271 command_runner.go:130] > BuildTags:      
	I0814 00:35:29.271838   44271 command_runner.go:130] >   containers_image_ostree_stub
	I0814 00:35:29.271845   44271 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 00:35:29.271857   44271 command_runner.go:130] >   btrfs_noversion
	I0814 00:35:29.271864   44271 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 00:35:29.271871   44271 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 00:35:29.271879   44271 command_runner.go:130] >   seccomp
	I0814 00:35:29.271884   44271 command_runner.go:130] > LDFlags:          unknown
	I0814 00:35:29.271889   44271 command_runner.go:130] > SeccompEnabled:   true
	I0814 00:35:29.271893   44271 command_runner.go:130] > AppArmorEnabled:  false
	I0814 00:35:29.271959   44271 ssh_runner.go:195] Run: crio --version
	I0814 00:35:29.303471   44271 command_runner.go:130] > crio version 1.29.1
	I0814 00:35:29.303490   44271 command_runner.go:130] > Version:        1.29.1
	I0814 00:35:29.303495   44271 command_runner.go:130] > GitCommit:      unknown
	I0814 00:35:29.303499   44271 command_runner.go:130] > GitCommitDate:  unknown
	I0814 00:35:29.303504   44271 command_runner.go:130] > GitTreeState:   clean
	I0814 00:35:29.303510   44271 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 00:35:29.303514   44271 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 00:35:29.303518   44271 command_runner.go:130] > Compiler:       gc
	I0814 00:35:29.303522   44271 command_runner.go:130] > Platform:       linux/amd64
	I0814 00:35:29.303526   44271 command_runner.go:130] > Linkmode:       dynamic
	I0814 00:35:29.303531   44271 command_runner.go:130] > BuildTags:      
	I0814 00:35:29.303535   44271 command_runner.go:130] >   containers_image_ostree_stub
	I0814 00:35:29.303539   44271 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 00:35:29.303542   44271 command_runner.go:130] >   btrfs_noversion
	I0814 00:35:29.303547   44271 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 00:35:29.303551   44271 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 00:35:29.303554   44271 command_runner.go:130] >   seccomp
	I0814 00:35:29.303558   44271 command_runner.go:130] > LDFlags:          unknown
	I0814 00:35:29.303563   44271 command_runner.go:130] > SeccompEnabled:   true
	I0814 00:35:29.303567   44271 command_runner.go:130] > AppArmorEnabled:  false
	I0814 00:35:29.305635   44271 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 00:35:29.307010   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:35:29.309603   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:29.309914   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:29.309941   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:29.310166   44271 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 00:35:29.314013   44271 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0814 00:35:29.314200   44271 kubeadm.go:883] updating cluster {Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:35:29.314323   44271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:35:29.314378   44271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:35:29.350689   44271 command_runner.go:130] > {
	I0814 00:35:29.350709   44271 command_runner.go:130] >   "images": [
	I0814 00:35:29.350714   44271 command_runner.go:130] >     {
	I0814 00:35:29.350722   44271 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 00:35:29.350727   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350737   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 00:35:29.350741   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350745   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350753   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 00:35:29.350760   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 00:35:29.350764   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350774   44271 command_runner.go:130] >       "size": "87165492",
	I0814 00:35:29.350778   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350782   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350787   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350792   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350795   44271 command_runner.go:130] >     },
	I0814 00:35:29.350799   44271 command_runner.go:130] >     {
	I0814 00:35:29.350808   44271 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 00:35:29.350824   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350832   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 00:35:29.350837   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350846   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350858   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 00:35:29.350873   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 00:35:29.350879   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350883   44271 command_runner.go:130] >       "size": "1363676",
	I0814 00:35:29.350890   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350898   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350905   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350909   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350915   44271 command_runner.go:130] >     },
	I0814 00:35:29.350918   44271 command_runner.go:130] >     {
	I0814 00:35:29.350924   44271 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 00:35:29.350930   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350936   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 00:35:29.350940   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350944   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350954   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 00:35:29.350961   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 00:35:29.350967   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350971   44271 command_runner.go:130] >       "size": "31470524",
	I0814 00:35:29.350975   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350978   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350982   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350987   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350990   44271 command_runner.go:130] >     },
	I0814 00:35:29.351004   44271 command_runner.go:130] >     {
	I0814 00:35:29.351013   44271 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 00:35:29.351017   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351022   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 00:35:29.351028   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351033   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351040   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 00:35:29.351055   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 00:35:29.351061   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351065   44271 command_runner.go:130] >       "size": "61245718",
	I0814 00:35:29.351071   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.351076   44271 command_runner.go:130] >       "username": "nonroot",
	I0814 00:35:29.351082   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351086   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351091   44271 command_runner.go:130] >     },
	I0814 00:35:29.351095   44271 command_runner.go:130] >     {
	I0814 00:35:29.351101   44271 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 00:35:29.351107   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351112   44271 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 00:35:29.351117   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351121   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351128   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 00:35:29.351137   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 00:35:29.351140   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351144   44271 command_runner.go:130] >       "size": "149009664",
	I0814 00:35:29.351148   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351152   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351155   44271 command_runner.go:130] >       },
	I0814 00:35:29.351160   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351164   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351167   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351171   44271 command_runner.go:130] >     },
	I0814 00:35:29.351174   44271 command_runner.go:130] >     {
	I0814 00:35:29.351180   44271 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 00:35:29.351186   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351191   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 00:35:29.351200   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351207   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351214   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 00:35:29.351222   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 00:35:29.351226   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351230   44271 command_runner.go:130] >       "size": "95233506",
	I0814 00:35:29.351236   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351240   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351245   44271 command_runner.go:130] >       },
	I0814 00:35:29.351251   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351260   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351266   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351274   44271 command_runner.go:130] >     },
	I0814 00:35:29.351280   44271 command_runner.go:130] >     {
	I0814 00:35:29.351290   44271 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 00:35:29.351296   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351302   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 00:35:29.351307   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351312   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351319   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 00:35:29.351339   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 00:35:29.351344   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351349   44271 command_runner.go:130] >       "size": "89437512",
	I0814 00:35:29.351352   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351356   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351359   44271 command_runner.go:130] >       },
	I0814 00:35:29.351363   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351367   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351371   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351375   44271 command_runner.go:130] >     },
	I0814 00:35:29.351378   44271 command_runner.go:130] >     {
	I0814 00:35:29.351384   44271 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 00:35:29.351390   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351394   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 00:35:29.351398   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351402   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351457   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 00:35:29.351470   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 00:35:29.351474   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351478   44271 command_runner.go:130] >       "size": "92728217",
	I0814 00:35:29.351482   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.351486   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351490   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351493   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351496   44271 command_runner.go:130] >     },
	I0814 00:35:29.351499   44271 command_runner.go:130] >     {
	I0814 00:35:29.351514   44271 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 00:35:29.351519   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351523   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 00:35:29.351526   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351533   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351542   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 00:35:29.351550   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 00:35:29.351555   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351560   44271 command_runner.go:130] >       "size": "68420936",
	I0814 00:35:29.351563   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351567   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351571   44271 command_runner.go:130] >       },
	I0814 00:35:29.351575   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351580   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351585   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351588   44271 command_runner.go:130] >     },
	I0814 00:35:29.351592   44271 command_runner.go:130] >     {
	I0814 00:35:29.351598   44271 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 00:35:29.351604   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351608   44271 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 00:35:29.351614   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351618   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351627   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 00:35:29.351634   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 00:35:29.351639   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351643   44271 command_runner.go:130] >       "size": "742080",
	I0814 00:35:29.351652   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351658   44271 command_runner.go:130] >         "value": "65535"
	I0814 00:35:29.351661   44271 command_runner.go:130] >       },
	I0814 00:35:29.351665   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351669   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351673   44271 command_runner.go:130] >       "pinned": true
	I0814 00:35:29.351678   44271 command_runner.go:130] >     }
	I0814 00:35:29.351681   44271 command_runner.go:130] >   ]
	I0814 00:35:29.351685   44271 command_runner.go:130] > }
	I0814 00:35:29.352264   44271 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:35:29.352278   44271 crio.go:433] Images already preloaded, skipping extraction
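	The preload check shells out to "sudo crictl images --output json" and walks the images array (id, repoTags, repoDigests, size, uid, pinned) to decide whether the preload tarball needs extracting; here every v1.31.0 component plus the pinned pause:3.10 image is already present, so extraction is skipped. To eyeball the same data by hand, assuming jq is available wherever you run it:

		# Sketch: the same image inventory minikube parses above (assumes jq).
		sudo crictl images --output json | jq -r '.images[].repoTags[]'
		# e.g. confirm the pinned pause image is in place:
		sudo crictl images --output json | jq -r '.images[] | select(.pinned) | .repoTags[]'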
	I0814 00:35:29.352322   44271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:35:29.382309   44271 command_runner.go:130] > {
	I0814 00:35:29.382330   44271 command_runner.go:130] >   "images": [
	I0814 00:35:29.382334   44271 command_runner.go:130] >     {
	I0814 00:35:29.382344   44271 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 00:35:29.382350   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382356   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 00:35:29.382359   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382363   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382372   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 00:35:29.382383   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 00:35:29.382388   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382395   44271 command_runner.go:130] >       "size": "87165492",
	I0814 00:35:29.382400   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382407   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382423   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382431   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382435   44271 command_runner.go:130] >     },
	I0814 00:35:29.382442   44271 command_runner.go:130] >     {
	I0814 00:35:29.382448   44271 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 00:35:29.382452   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382458   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 00:35:29.382463   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382467   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382498   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 00:35:29.382514   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 00:35:29.382520   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382526   44271 command_runner.go:130] >       "size": "1363676",
	I0814 00:35:29.382533   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382547   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382556   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382563   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382571   44271 command_runner.go:130] >     },
	I0814 00:35:29.382576   44271 command_runner.go:130] >     {
	I0814 00:35:29.382585   44271 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 00:35:29.382595   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382605   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 00:35:29.382611   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382619   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382631   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 00:35:29.382647   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 00:35:29.382655   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382663   44271 command_runner.go:130] >       "size": "31470524",
	I0814 00:35:29.382671   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382678   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382685   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382690   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382698   44271 command_runner.go:130] >     },
	I0814 00:35:29.382704   44271 command_runner.go:130] >     {
	I0814 00:35:29.382717   44271 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 00:35:29.382723   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382734   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 00:35:29.382743   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382750   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382765   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 00:35:29.382790   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 00:35:29.382798   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382805   44271 command_runner.go:130] >       "size": "61245718",
	I0814 00:35:29.382812   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382822   44271 command_runner.go:130] >       "username": "nonroot",
	I0814 00:35:29.382835   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382844   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382850   44271 command_runner.go:130] >     },
	I0814 00:35:29.382858   44271 command_runner.go:130] >     {
	I0814 00:35:29.382868   44271 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 00:35:29.382876   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382881   44271 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 00:35:29.382890   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382897   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382911   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 00:35:29.382924   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 00:35:29.382932   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382939   44271 command_runner.go:130] >       "size": "149009664",
	I0814 00:35:29.382947   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.382954   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.382962   44271 command_runner.go:130] >       },
	I0814 00:35:29.382970   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382978   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382983   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382989   44271 command_runner.go:130] >     },
	I0814 00:35:29.382995   44271 command_runner.go:130] >     {
	I0814 00:35:29.383007   44271 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 00:35:29.383014   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383025   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 00:35:29.383033   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383040   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383054   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 00:35:29.383069   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 00:35:29.383077   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383082   44271 command_runner.go:130] >       "size": "95233506",
	I0814 00:35:29.383088   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383095   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383103   44271 command_runner.go:130] >       },
	I0814 00:35:29.383110   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383119   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383125   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383139   44271 command_runner.go:130] >     },
	I0814 00:35:29.383147   44271 command_runner.go:130] >     {
	I0814 00:35:29.383157   44271 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 00:35:29.383165   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383171   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 00:35:29.383178   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383185   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383201   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 00:35:29.383215   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 00:35:29.383224   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383231   44271 command_runner.go:130] >       "size": "89437512",
	I0814 00:35:29.383240   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383246   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383255   44271 command_runner.go:130] >       },
	I0814 00:35:29.383261   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383269   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383274   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383281   44271 command_runner.go:130] >     },
	I0814 00:35:29.383286   44271 command_runner.go:130] >     {
	I0814 00:35:29.383298   44271 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 00:35:29.383308   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383317   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 00:35:29.383326   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383332   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383363   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 00:35:29.383373   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 00:35:29.383378   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383385   44271 command_runner.go:130] >       "size": "92728217",
	I0814 00:35:29.383395   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.383401   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383411   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383418   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383426   44271 command_runner.go:130] >     },
	I0814 00:35:29.383431   44271 command_runner.go:130] >     {
	I0814 00:35:29.383444   44271 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 00:35:29.383452   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383560   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 00:35:29.383708   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383724   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383738   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 00:35:29.383755   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 00:35:29.383762   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383769   44271 command_runner.go:130] >       "size": "68420936",
	I0814 00:35:29.383776   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383782   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383793   44271 command_runner.go:130] >       },
	I0814 00:35:29.383800   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383807   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383813   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383819   44271 command_runner.go:130] >     },
	I0814 00:35:29.383824   44271 command_runner.go:130] >     {
	I0814 00:35:29.383839   44271 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 00:35:29.383846   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383852   44271 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 00:35:29.383914   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383942   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383961   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 00:35:29.383983   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 00:35:29.383989   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383995   44271 command_runner.go:130] >       "size": "742080",
	I0814 00:35:29.384000   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.384005   44271 command_runner.go:130] >         "value": "65535"
	I0814 00:35:29.384010   44271 command_runner.go:130] >       },
	I0814 00:35:29.384021   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.384027   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.384034   44271 command_runner.go:130] >       "pinned": true
	I0814 00:35:29.384039   44271 command_runner.go:130] >     }
	I0814 00:35:29.384044   44271 command_runner.go:130] >   ]
	I0814 00:35:29.384049   44271 command_runner.go:130] > }
	I0814 00:35:29.384260   44271 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:35:29.384268   44271 cache_images.go:84] Images are preloaded, skipping loading
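	The crictl listing above is what the preload check at crio.go:514 works from: the JSON returned by sudo crictl images --output json is compared against the images a v1.31.0 control plane needs on CRI-O, and since every tag is present the cached-image load is skipped. A rough way to reproduce that check by hand on the node is sketched below. The crictl invocation is the one shown in the log; the jq filter, the temporary file, and the hard-coded tag list are illustrative assumptions (jq may not even be present on the minikube guest), not minikube's actual implementation.

	    # On the node: list the image tags CRI-O already has,
	    # then check the expected v1.31.0 control-plane images against that list.
	    sudo crictl images --output json \
	      | jq -r '.images[].repoTags[]' > /tmp/have.txt
	    for img in \
	        registry.k8s.io/kube-apiserver:v1.31.0 \
	        registry.k8s.io/kube-controller-manager:v1.31.0 \
	        registry.k8s.io/kube-scheduler:v1.31.0 \
	        registry.k8s.io/kube-proxy:v1.31.0 \
	        registry.k8s.io/etcd:3.5.15-0 \
	        registry.k8s.io/coredns/coredns:v1.11.1 \
	        registry.k8s.io/pause:3.10; do
	      grep -qx "$img" /tmp/have.txt || echo "missing: $img"
	    done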
	I0814 00:35:29.384279   44271 kubeadm.go:934] updating node { 192.168.39.201 8443 v1.31.0 crio true true} ...
	I0814 00:35:29.384435   44271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-745925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
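	The [Unit]/[Service] text above is the kubelet drop-in minikube renders for node multinode-745925 (node IP 192.168.39.201), followed by the cluster config it was generated from. If the rendered flags need to be confirmed against what actually landed on the machine, systemd can print the merged unit; the command below is one plausible way to do that from the host and is not part of the test run itself.

	    # Show the kubelet unit plus any drop-ins as systemd sees them on the node.
	    minikube ssh -p multinode-745925 -- sudo systemctl cat kubelet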
	I0814 00:35:29.384508   44271 ssh_runner.go:195] Run: crio config
	I0814 00:35:29.426507   44271 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0814 00:35:29.426548   44271 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0814 00:35:29.426560   44271 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0814 00:35:29.426565   44271 command_runner.go:130] > #
	I0814 00:35:29.426577   44271 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0814 00:35:29.426587   44271 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0814 00:35:29.426596   44271 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0814 00:35:29.426608   44271 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0814 00:35:29.426616   44271 command_runner.go:130] > # reload'.
	I0814 00:35:29.426625   44271 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0814 00:35:29.426638   44271 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0814 00:35:29.426648   44271 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0814 00:35:29.426659   44271 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0814 00:35:29.426665   44271 command_runner.go:130] > [crio]
	I0814 00:35:29.426677   44271 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0814 00:35:29.426685   44271 command_runner.go:130] > # containers images, in this directory.
	I0814 00:35:29.426696   44271 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0814 00:35:29.426716   44271 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0814 00:35:29.426726   44271 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0814 00:35:29.426736   44271 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0814 00:35:29.426746   44271 command_runner.go:130] > # imagestore = ""
	I0814 00:35:29.426756   44271 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0814 00:35:29.426768   44271 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0814 00:35:29.426775   44271 command_runner.go:130] > storage_driver = "overlay"
	I0814 00:35:29.426787   44271 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0814 00:35:29.426796   44271 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0814 00:35:29.426802   44271 command_runner.go:130] > storage_option = [
	I0814 00:35:29.426812   44271 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0814 00:35:29.426818   44271 command_runner.go:130] > ]
	I0814 00:35:29.426830   44271 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0814 00:35:29.426843   44271 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0814 00:35:29.426850   44271 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0814 00:35:29.426858   44271 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0814 00:35:29.426867   44271 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0814 00:35:29.426874   44271 command_runner.go:130] > # always happen on a node reboot
	I0814 00:35:29.426898   44271 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0814 00:35:29.426930   44271 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0814 00:35:29.426944   44271 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0814 00:35:29.426955   44271 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0814 00:35:29.426968   44271 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0814 00:35:29.426983   44271 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0814 00:35:29.426997   44271 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0814 00:35:29.427007   44271 command_runner.go:130] > # internal_wipe = true
	I0814 00:35:29.427018   44271 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0814 00:35:29.427027   44271 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0814 00:35:29.427037   44271 command_runner.go:130] > # internal_repair = false
	I0814 00:35:29.427048   44271 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0814 00:35:29.427061   44271 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0814 00:35:29.427072   44271 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0814 00:35:29.427081   44271 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0814 00:35:29.427095   44271 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0814 00:35:29.427103   44271 command_runner.go:130] > [crio.api]
	I0814 00:35:29.427111   44271 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0814 00:35:29.427123   44271 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0814 00:35:29.427136   44271 command_runner.go:130] > # IP address on which the stream server will listen.
	I0814 00:35:29.427146   44271 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0814 00:35:29.427159   44271 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0814 00:35:29.427169   44271 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0814 00:35:29.427176   44271 command_runner.go:130] > # stream_port = "0"
	I0814 00:35:29.427185   44271 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0814 00:35:29.427194   44271 command_runner.go:130] > # stream_enable_tls = false
	I0814 00:35:29.427207   44271 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0814 00:35:29.427214   44271 command_runner.go:130] > # stream_idle_timeout = ""
	I0814 00:35:29.427227   44271 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0814 00:35:29.427239   44271 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0814 00:35:29.427247   44271 command_runner.go:130] > # minutes.
	I0814 00:35:29.427256   44271 command_runner.go:130] > # stream_tls_cert = ""
	I0814 00:35:29.427266   44271 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0814 00:35:29.427282   44271 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0814 00:35:29.427291   44271 command_runner.go:130] > # stream_tls_key = ""
	I0814 00:35:29.427300   44271 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0814 00:35:29.427310   44271 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0814 00:35:29.427337   44271 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0814 00:35:29.427346   44271 command_runner.go:130] > # stream_tls_ca = ""
	I0814 00:35:29.427358   44271 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 00:35:29.427367   44271 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0814 00:35:29.427385   44271 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 00:35:29.427397   44271 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0814 00:35:29.427410   44271 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0814 00:35:29.427424   44271 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0814 00:35:29.427434   44271 command_runner.go:130] > [crio.runtime]
	I0814 00:35:29.427444   44271 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0814 00:35:29.427456   44271 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0814 00:35:29.427464   44271 command_runner.go:130] > # "nofile=1024:2048"
	I0814 00:35:29.427473   44271 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0814 00:35:29.427482   44271 command_runner.go:130] > # default_ulimits = [
	I0814 00:35:29.427487   44271 command_runner.go:130] > # ]
	I0814 00:35:29.427498   44271 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0814 00:35:29.427507   44271 command_runner.go:130] > # no_pivot = false
	I0814 00:35:29.427516   44271 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0814 00:35:29.427529   44271 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0814 00:35:29.427539   44271 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0814 00:35:29.427554   44271 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0814 00:35:29.427565   44271 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0814 00:35:29.427576   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 00:35:29.427587   44271 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0814 00:35:29.427596   44271 command_runner.go:130] > # Cgroup setting for conmon
	I0814 00:35:29.427611   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0814 00:35:29.427618   44271 command_runner.go:130] > conmon_cgroup = "pod"
	I0814 00:35:29.427629   44271 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0814 00:35:29.427640   44271 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0814 00:35:29.427654   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 00:35:29.427663   44271 command_runner.go:130] > conmon_env = [
	I0814 00:35:29.427677   44271 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 00:35:29.427687   44271 command_runner.go:130] > ]
	I0814 00:35:29.427695   44271 command_runner.go:130] > # Additional environment variables to set for all the
	I0814 00:35:29.427706   44271 command_runner.go:130] > # containers. These are overridden if set in the
	I0814 00:35:29.427717   44271 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0814 00:35:29.427727   44271 command_runner.go:130] > # default_env = [
	I0814 00:35:29.427732   44271 command_runner.go:130] > # ]
	I0814 00:35:29.427745   44271 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0814 00:35:29.427760   44271 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0814 00:35:29.427769   44271 command_runner.go:130] > # selinux = false
	I0814 00:35:29.427779   44271 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0814 00:35:29.427791   44271 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0814 00:35:29.427803   44271 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0814 00:35:29.427809   44271 command_runner.go:130] > # seccomp_profile = ""
	I0814 00:35:29.427821   44271 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0814 00:35:29.427833   44271 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0814 00:35:29.427843   44271 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0814 00:35:29.427853   44271 command_runner.go:130] > # which might increase security.
	I0814 00:35:29.427860   44271 command_runner.go:130] > # This option is currently deprecated,
	I0814 00:35:29.427873   44271 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0814 00:35:29.427883   44271 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0814 00:35:29.427893   44271 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0814 00:35:29.427905   44271 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0814 00:35:29.427916   44271 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0814 00:35:29.427928   44271 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0814 00:35:29.427939   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.427950   44271 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0814 00:35:29.427960   44271 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0814 00:35:29.427970   44271 command_runner.go:130] > # the cgroup blockio controller.
	I0814 00:35:29.427976   44271 command_runner.go:130] > # blockio_config_file = ""
	I0814 00:35:29.427990   44271 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0814 00:35:29.428001   44271 command_runner.go:130] > # blockio parameters.
	I0814 00:35:29.428006   44271 command_runner.go:130] > # blockio_reload = false
	I0814 00:35:29.428017   44271 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0814 00:35:29.428025   44271 command_runner.go:130] > # irqbalance daemon.
	I0814 00:35:29.428033   44271 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0814 00:35:29.428043   44271 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0814 00:35:29.428056   44271 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0814 00:35:29.428070   44271 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0814 00:35:29.428082   44271 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0814 00:35:29.428095   44271 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0814 00:35:29.428106   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.428118   44271 command_runner.go:130] > # rdt_config_file = ""
	I0814 00:35:29.428127   44271 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0814 00:35:29.428136   44271 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0814 00:35:29.428175   44271 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0814 00:35:29.428185   44271 command_runner.go:130] > # separate_pull_cgroup = ""
	I0814 00:35:29.428198   44271 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0814 00:35:29.428211   44271 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0814 00:35:29.428219   44271 command_runner.go:130] > # will be added.
	I0814 00:35:29.428226   44271 command_runner.go:130] > # default_capabilities = [
	I0814 00:35:29.428234   44271 command_runner.go:130] > # 	"CHOWN",
	I0814 00:35:29.428241   44271 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0814 00:35:29.428247   44271 command_runner.go:130] > # 	"FSETID",
	I0814 00:35:29.428254   44271 command_runner.go:130] > # 	"FOWNER",
	I0814 00:35:29.428263   44271 command_runner.go:130] > # 	"SETGID",
	I0814 00:35:29.428269   44271 command_runner.go:130] > # 	"SETUID",
	I0814 00:35:29.428278   44271 command_runner.go:130] > # 	"SETPCAP",
	I0814 00:35:29.428285   44271 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0814 00:35:29.428295   44271 command_runner.go:130] > # 	"KILL",
	I0814 00:35:29.428300   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428314   44271 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0814 00:35:29.428327   44271 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0814 00:35:29.428333   44271 command_runner.go:130] > # add_inheritable_capabilities = false
	I0814 00:35:29.428343   44271 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0814 00:35:29.428354   44271 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 00:35:29.428363   44271 command_runner.go:130] > default_sysctls = [
	I0814 00:35:29.428380   44271 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0814 00:35:29.428397   44271 command_runner.go:130] > ]
	I0814 00:35:29.428408   44271 command_runner.go:130] > # List of devices on the host that a
	I0814 00:35:29.428417   44271 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0814 00:35:29.428427   44271 command_runner.go:130] > # allowed_devices = [
	I0814 00:35:29.428433   44271 command_runner.go:130] > # 	"/dev/fuse",
	I0814 00:35:29.428442   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428450   44271 command_runner.go:130] > # List of additional devices, specified as
	I0814 00:35:29.428464   44271 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0814 00:35:29.428477   44271 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0814 00:35:29.428488   44271 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 00:35:29.428495   44271 command_runner.go:130] > # additional_devices = [
	I0814 00:35:29.428503   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428513   44271 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0814 00:35:29.428523   44271 command_runner.go:130] > # cdi_spec_dirs = [
	I0814 00:35:29.428529   44271 command_runner.go:130] > # 	"/etc/cdi",
	I0814 00:35:29.428538   44271 command_runner.go:130] > # 	"/var/run/cdi",
	I0814 00:35:29.428543   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428555   44271 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0814 00:35:29.428568   44271 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0814 00:35:29.428576   44271 command_runner.go:130] > # Defaults to false.
	I0814 00:35:29.428584   44271 command_runner.go:130] > # device_ownership_from_security_context = false
	I0814 00:35:29.428596   44271 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0814 00:35:29.428608   44271 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0814 00:35:29.428617   44271 command_runner.go:130] > # hooks_dir = [
	I0814 00:35:29.428625   44271 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0814 00:35:29.428633   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428642   44271 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0814 00:35:29.428655   44271 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0814 00:35:29.428666   44271 command_runner.go:130] > # its default mounts from the following two files:
	I0814 00:35:29.428672   44271 command_runner.go:130] > #
	I0814 00:35:29.428686   44271 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0814 00:35:29.428699   44271 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0814 00:35:29.428710   44271 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0814 00:35:29.428719   44271 command_runner.go:130] > #
	I0814 00:35:29.428728   44271 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0814 00:35:29.428741   44271 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0814 00:35:29.428759   44271 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0814 00:35:29.428772   44271 command_runner.go:130] > #      only add mounts it finds in this file.
	I0814 00:35:29.428777   44271 command_runner.go:130] > #
	I0814 00:35:29.428786   44271 command_runner.go:130] > # default_mounts_file = ""
	I0814 00:35:29.428795   44271 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0814 00:35:29.428809   44271 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0814 00:35:29.428819   44271 command_runner.go:130] > pids_limit = 1024
	I0814 00:35:29.428828   44271 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0814 00:35:29.428840   44271 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0814 00:35:29.428853   44271 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0814 00:35:29.428867   44271 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0814 00:35:29.428877   44271 command_runner.go:130] > # log_size_max = -1
	I0814 00:35:29.428890   44271 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0814 00:35:29.428899   44271 command_runner.go:130] > # log_to_journald = false
	I0814 00:35:29.428927   44271 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0814 00:35:29.428944   44271 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0814 00:35:29.428955   44271 command_runner.go:130] > # Path to directory for container attach sockets.
	I0814 00:35:29.428964   44271 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0814 00:35:29.428976   44271 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0814 00:35:29.428983   44271 command_runner.go:130] > # bind_mount_prefix = ""
	I0814 00:35:29.428995   44271 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0814 00:35:29.429007   44271 command_runner.go:130] > # read_only = false
	I0814 00:35:29.429017   44271 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0814 00:35:29.429029   44271 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0814 00:35:29.429040   44271 command_runner.go:130] > # live configuration reload.
	I0814 00:35:29.429048   44271 command_runner.go:130] > # log_level = "info"
	I0814 00:35:29.429057   44271 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0814 00:35:29.429067   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.429078   44271 command_runner.go:130] > # log_filter = ""
	I0814 00:35:29.429087   44271 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0814 00:35:29.429100   44271 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0814 00:35:29.429110   44271 command_runner.go:130] > # separated by comma.
	I0814 00:35:29.429121   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429130   44271 command_runner.go:130] > # uid_mappings = ""
	I0814 00:35:29.429140   44271 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0814 00:35:29.429150   44271 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0814 00:35:29.429164   44271 command_runner.go:130] > # separated by comma.
	I0814 00:35:29.429178   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429189   44271 command_runner.go:130] > # gid_mappings = ""
	I0814 00:35:29.429198   44271 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0814 00:35:29.429209   44271 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 00:35:29.429222   44271 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 00:35:29.429239   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429264   44271 command_runner.go:130] > # minimum_mappable_uid = -1
	I0814 00:35:29.429279   44271 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0814 00:35:29.429291   44271 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 00:35:29.429302   44271 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 00:35:29.429313   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429324   44271 command_runner.go:130] > # minimum_mappable_gid = -1
	I0814 00:35:29.429335   44271 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0814 00:35:29.429347   44271 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0814 00:35:29.429359   44271 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0814 00:35:29.429367   44271 command_runner.go:130] > # ctr_stop_timeout = 30
	I0814 00:35:29.429384   44271 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0814 00:35:29.429393   44271 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0814 00:35:29.429402   44271 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0814 00:35:29.429408   44271 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0814 00:35:29.429417   44271 command_runner.go:130] > drop_infra_ctr = false
	I0814 00:35:29.429426   44271 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0814 00:35:29.429437   44271 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0814 00:35:29.429452   44271 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0814 00:35:29.429459   44271 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0814 00:35:29.429471   44271 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0814 00:35:29.429484   44271 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0814 00:35:29.429496   44271 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0814 00:35:29.429504   44271 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0814 00:35:29.429513   44271 command_runner.go:130] > # shared_cpuset = ""
	I0814 00:35:29.429524   44271 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0814 00:35:29.429535   44271 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0814 00:35:29.429545   44271 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0814 00:35:29.429557   44271 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0814 00:35:29.429567   44271 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0814 00:35:29.429585   44271 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0814 00:35:29.429598   44271 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0814 00:35:29.429604   44271 command_runner.go:130] > # enable_criu_support = false
	I0814 00:35:29.429613   44271 command_runner.go:130] > # Enable/disable the generation of the container,
	I0814 00:35:29.429621   44271 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0814 00:35:29.429632   44271 command_runner.go:130] > # enable_pod_events = false
	I0814 00:35:29.429640   44271 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 00:35:29.429653   44271 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 00:35:29.429661   44271 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0814 00:35:29.429670   44271 command_runner.go:130] > # default_runtime = "runc"
	I0814 00:35:29.429680   44271 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0814 00:35:29.429694   44271 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0814 00:35:29.429712   44271 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0814 00:35:29.429723   44271 command_runner.go:130] > # creation as a file is not desired either.
	I0814 00:35:29.429739   44271 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0814 00:35:29.429750   44271 command_runner.go:130] > # the hostname is being managed dynamically.
	I0814 00:35:29.429757   44271 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0814 00:35:29.429766   44271 command_runner.go:130] > # ]
	I0814 00:35:29.429775   44271 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0814 00:35:29.429790   44271 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0814 00:35:29.429802   44271 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0814 00:35:29.429812   44271 command_runner.go:130] > # Each entry in the table should follow the format:
	I0814 00:35:29.429821   44271 command_runner.go:130] > #
	I0814 00:35:29.429828   44271 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0814 00:35:29.429839   44271 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0814 00:35:29.429885   44271 command_runner.go:130] > # runtime_type = "oci"
	I0814 00:35:29.429904   44271 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0814 00:35:29.429913   44271 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0814 00:35:29.429922   44271 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0814 00:35:29.429933   44271 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0814 00:35:29.429939   44271 command_runner.go:130] > # monitor_env = []
	I0814 00:35:29.429950   44271 command_runner.go:130] > # privileged_without_host_devices = false
	I0814 00:35:29.429960   44271 command_runner.go:130] > # allowed_annotations = []
	I0814 00:35:29.429969   44271 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0814 00:35:29.429977   44271 command_runner.go:130] > # Where:
	I0814 00:35:29.429985   44271 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0814 00:35:29.429999   44271 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0814 00:35:29.430013   44271 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0814 00:35:29.430026   44271 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0814 00:35:29.430033   44271 command_runner.go:130] > #   in $PATH.
	I0814 00:35:29.430068   44271 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0814 00:35:29.430079   44271 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0814 00:35:29.430089   44271 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0814 00:35:29.430100   44271 command_runner.go:130] > #   state.
	I0814 00:35:29.430115   44271 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0814 00:35:29.430128   44271 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0814 00:35:29.430140   44271 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0814 00:35:29.430152   44271 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0814 00:35:29.430164   44271 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0814 00:35:29.430177   44271 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0814 00:35:29.430188   44271 command_runner.go:130] > #   The currently recognized values are:
	I0814 00:35:29.430202   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0814 00:35:29.430216   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0814 00:35:29.430229   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0814 00:35:29.430243   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0814 00:35:29.430257   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0814 00:35:29.430271   44271 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0814 00:35:29.430284   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0814 00:35:29.430294   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0814 00:35:29.430307   44271 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0814 00:35:29.430318   44271 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0814 00:35:29.430328   44271 command_runner.go:130] > #   deprecated option "conmon".
	I0814 00:35:29.430339   44271 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0814 00:35:29.430349   44271 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0814 00:35:29.430361   44271 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0814 00:35:29.430371   44271 command_runner.go:130] > #   should be moved to the container's cgroup
	I0814 00:35:29.430386   44271 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0814 00:35:29.430396   44271 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0814 00:35:29.430409   44271 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0814 00:35:29.430421   44271 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0814 00:35:29.430427   44271 command_runner.go:130] > #
	I0814 00:35:29.430434   44271 command_runner.go:130] > # Using the seccomp notifier feature:
	I0814 00:35:29.430445   44271 command_runner.go:130] > #
	I0814 00:35:29.430458   44271 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0814 00:35:29.430469   44271 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0814 00:35:29.430477   44271 command_runner.go:130] > #
	I0814 00:35:29.430487   44271 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0814 00:35:29.430499   44271 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0814 00:35:29.430508   44271 command_runner.go:130] > #
	I0814 00:35:29.430518   44271 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0814 00:35:29.430527   44271 command_runner.go:130] > # feature.
	I0814 00:35:29.430533   44271 command_runner.go:130] > #
	I0814 00:35:29.430544   44271 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0814 00:35:29.430556   44271 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0814 00:35:29.430569   44271 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0814 00:35:29.430582   44271 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0814 00:35:29.430594   44271 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0814 00:35:29.430602   44271 command_runner.go:130] > #
	I0814 00:35:29.430612   44271 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0814 00:35:29.430624   44271 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0814 00:35:29.430632   44271 command_runner.go:130] > #
	I0814 00:35:29.430645   44271 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0814 00:35:29.430658   44271 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0814 00:35:29.430666   44271 command_runner.go:130] > #
	I0814 00:35:29.430679   44271 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0814 00:35:29.430691   44271 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0814 00:35:29.430700   44271 command_runner.go:130] > # limitation.
	I0814 00:35:29.430710   44271 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0814 00:35:29.430720   44271 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0814 00:35:29.430730   44271 command_runner.go:130] > runtime_type = "oci"
	I0814 00:35:29.430740   44271 command_runner.go:130] > runtime_root = "/run/runc"
	I0814 00:35:29.430750   44271 command_runner.go:130] > runtime_config_path = ""
	I0814 00:35:29.430762   44271 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0814 00:35:29.430771   44271 command_runner.go:130] > monitor_cgroup = "pod"
	I0814 00:35:29.430780   44271 command_runner.go:130] > monitor_exec_cgroup = ""
	I0814 00:35:29.430786   44271 command_runner.go:130] > monitor_env = [
	I0814 00:35:29.430799   44271 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 00:35:29.430806   44271 command_runner.go:130] > ]
	I0814 00:35:29.430815   44271 command_runner.go:130] > privileged_without_host_devices = false
	I0814 00:35:29.430827   44271 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0814 00:35:29.430838   44271 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0814 00:35:29.430850   44271 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0814 00:35:29.430864   44271 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0814 00:35:29.430879   44271 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0814 00:35:29.430895   44271 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0814 00:35:29.430913   44271 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0814 00:35:29.430928   44271 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0814 00:35:29.430941   44271 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0814 00:35:29.430956   44271 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0814 00:35:29.430965   44271 command_runner.go:130] > # Example:
	I0814 00:35:29.430976   44271 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0814 00:35:29.430986   44271 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0814 00:35:29.430997   44271 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0814 00:35:29.431007   44271 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0814 00:35:29.431015   44271 command_runner.go:130] > # cpuset = 0
	I0814 00:35:29.431021   44271 command_runner.go:130] > # cpushares = "0-1"
	I0814 00:35:29.431029   44271 command_runner.go:130] > # Where:
	I0814 00:35:29.431039   44271 command_runner.go:130] > # The workload name is workload-type.
	I0814 00:35:29.431052   44271 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0814 00:35:29.431062   44271 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0814 00:35:29.431072   44271 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0814 00:35:29.431085   44271 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0814 00:35:29.431098   44271 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
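For illustration, a minimal workloads stanza could look like the sketch below. The workload name, annotation values, and resource numbers are placeholders, and it assumes cpuset takes a CPU range string while cpushares takes an integer share count (the commented example above shows the types the other way around):

    [crio.runtime.workloads.throttled]
    activation_annotation = "io.crio/workload"
    annotation_prefix = "io.crio.workload-type"
    [crio.runtime.workloads.throttled.resources]
    cpushares = 512    # default CPU shares for opted-in containers
    cpuset = "0-1"     # default CPU set for opted-in containers

A pod opts in by carrying the "io.crio/workload" annotation (key only; the value is ignored), as described in the comments above.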
	I0814 00:35:29.431109   44271 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0814 00:35:29.431122   44271 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0814 00:35:29.431131   44271 command_runner.go:130] > # Default value is set to true
	I0814 00:35:29.431141   44271 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0814 00:35:29.431153   44271 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0814 00:35:29.431163   44271 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0814 00:35:29.431173   44271 command_runner.go:130] > # Default value is set to 'false'
	I0814 00:35:29.431183   44271 command_runner.go:130] > # disable_hostport_mapping = false
	I0814 00:35:29.431196   44271 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0814 00:35:29.431205   44271 command_runner.go:130] > #
	I0814 00:35:29.431217   44271 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0814 00:35:29.431231   44271 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0814 00:35:29.431240   44271 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0814 00:35:29.431250   44271 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0814 00:35:29.431260   44271 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0814 00:35:29.431265   44271 command_runner.go:130] > [crio.image]
	I0814 00:35:29.431273   44271 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0814 00:35:29.431278   44271 command_runner.go:130] > # default_transport = "docker://"
	I0814 00:35:29.431287   44271 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0814 00:35:29.431296   44271 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0814 00:35:29.431302   44271 command_runner.go:130] > # global_auth_file = ""
	I0814 00:35:29.431309   44271 command_runner.go:130] > # The image used to instantiate infra containers.
	I0814 00:35:29.431315   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.431323   44271 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0814 00:35:29.431335   44271 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0814 00:35:29.431346   44271 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0814 00:35:29.431355   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.431362   44271 command_runner.go:130] > # pause_image_auth_file = ""
	I0814 00:35:29.431378   44271 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0814 00:35:29.431389   44271 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0814 00:35:29.431401   44271 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0814 00:35:29.431412   44271 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0814 00:35:29.431422   44271 command_runner.go:130] > # pause_command = "/pause"
	I0814 00:35:29.431432   44271 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0814 00:35:29.431443   44271 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0814 00:35:29.431455   44271 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0814 00:35:29.431466   44271 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0814 00:35:29.431477   44271 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0814 00:35:29.431489   44271 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0814 00:35:29.431498   44271 command_runner.go:130] > # pinned_images = [
	I0814 00:35:29.431506   44271 command_runner.go:130] > # ]
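As a sketch of the three pattern kinds described above (the image names are illustrative only):

    pinned_images = [
      "registry.k8s.io/pause:3.10",   # exact: must match the entire name
      "registry.k8s.io/kube-*",       # glob: wildcard allowed at the end
      "*coredns*",                    # keyword: wildcards on both ends
    ]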
	I0814 00:35:29.431518   44271 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0814 00:35:29.431532   44271 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0814 00:35:29.431545   44271 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0814 00:35:29.431558   44271 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0814 00:35:29.431568   44271 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0814 00:35:29.431577   44271 command_runner.go:130] > # signature_policy = ""
	I0814 00:35:29.431589   44271 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0814 00:35:29.431602   44271 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0814 00:35:29.431613   44271 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0814 00:35:29.431631   44271 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0814 00:35:29.431641   44271 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0814 00:35:29.431650   44271 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
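For example, with the default directory shown above, a pull for a sandbox in a hypothetical namespace "team-a" would be checked against /etc/crio/policies/team-a.json; if that file does not exist, or no namespace is supplied in the sandbox config, CRI-O falls back to signature_policy or the system-wide policy. A minimal sketch:

    signature_policy_dir = "/etc/crio/policies"
    # pull in namespace "team-a"   -> /etc/crio/policies/team-a.json
    # file missing / no namespace  -> signature_policy or /etc/containers/policy.json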
	I0814 00:35:29.431660   44271 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0814 00:35:29.431674   44271 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0814 00:35:29.431683   44271 command_runner.go:130] > # changing them here.
	I0814 00:35:29.431693   44271 command_runner.go:130] > # insecure_registries = [
	I0814 00:35:29.431701   44271 command_runner.go:130] > # ]
	I0814 00:35:29.431712   44271 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0814 00:35:29.431723   44271 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0814 00:35:29.431731   44271 command_runner.go:130] > # image_volumes = "mkdir"
	I0814 00:35:29.431739   44271 command_runner.go:130] > # Temporary directory to use for storing big files
	I0814 00:35:29.431747   44271 command_runner.go:130] > # big_files_temporary_dir = ""
	I0814 00:35:29.431760   44271 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0814 00:35:29.431769   44271 command_runner.go:130] > # CNI plugins.
	I0814 00:35:29.431777   44271 command_runner.go:130] > [crio.network]
	I0814 00:35:29.431790   44271 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0814 00:35:29.431800   44271 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0814 00:35:29.431809   44271 command_runner.go:130] > # cni_default_network = ""
	I0814 00:35:29.431821   44271 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0814 00:35:29.431832   44271 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0814 00:35:29.431843   44271 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0814 00:35:29.431852   44271 command_runner.go:130] > # plugin_dirs = [
	I0814 00:35:29.431858   44271 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0814 00:35:29.431866   44271 command_runner.go:130] > # ]
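A sketch of how network selection works with the defaults above (the network name is a placeholder):

    [crio.network]
    # cni_default_network = ""         # unset: first config found in network_dir wins
    # cni_default_network = "kindnet"  # set: the named network is used
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = ["/opt/cni/bin/"]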
	I0814 00:35:29.431876   44271 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0814 00:35:29.431885   44271 command_runner.go:130] > [crio.metrics]
	I0814 00:35:29.431892   44271 command_runner.go:130] > # Globally enable or disable metrics support.
	I0814 00:35:29.431901   44271 command_runner.go:130] > enable_metrics = true
	I0814 00:35:29.431911   44271 command_runner.go:130] > # Specify enabled metrics collectors.
	I0814 00:35:29.431921   44271 command_runner.go:130] > # Per default all metrics are enabled.
	I0814 00:35:29.431934   44271 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0814 00:35:29.431946   44271 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0814 00:35:29.431960   44271 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0814 00:35:29.431968   44271 command_runner.go:130] > # metrics_collectors = [
	I0814 00:35:29.431977   44271 command_runner.go:130] > # 	"operations",
	I0814 00:35:29.431987   44271 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0814 00:35:29.431997   44271 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0814 00:35:29.432008   44271 command_runner.go:130] > # 	"operations_errors",
	I0814 00:35:29.432016   44271 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0814 00:35:29.432025   44271 command_runner.go:130] > # 	"image_pulls_by_name",
	I0814 00:35:29.432032   44271 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0814 00:35:29.432040   44271 command_runner.go:130] > # 	"image_pulls_failures",
	I0814 00:35:29.432049   44271 command_runner.go:130] > # 	"image_pulls_successes",
	I0814 00:35:29.432058   44271 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0814 00:35:29.432067   44271 command_runner.go:130] > # 	"image_layer_reuse",
	I0814 00:35:29.432077   44271 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0814 00:35:29.432086   44271 command_runner.go:130] > # 	"containers_oom_total",
	I0814 00:35:29.432095   44271 command_runner.go:130] > # 	"containers_oom",
	I0814 00:35:29.432103   44271 command_runner.go:130] > # 	"processes_defunct",
	I0814 00:35:29.432110   44271 command_runner.go:130] > # 	"operations_total",
	I0814 00:35:29.432119   44271 command_runner.go:130] > # 	"operations_latency_seconds",
	I0814 00:35:29.432129   44271 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0814 00:35:29.432138   44271 command_runner.go:130] > # 	"operations_errors_total",
	I0814 00:35:29.432147   44271 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0814 00:35:29.432157   44271 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0814 00:35:29.432167   44271 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0814 00:35:29.432177   44271 command_runner.go:130] > # 	"image_pulls_success_total",
	I0814 00:35:29.432186   44271 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0814 00:35:29.432194   44271 command_runner.go:130] > # 	"containers_oom_count_total",
	I0814 00:35:29.432201   44271 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0814 00:35:29.432211   44271 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0814 00:35:29.432217   44271 command_runner.go:130] > # ]
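Per the prefixing rule above, the unprefixed and prefixed spellings select the same collector, so an explicit list could be written as (collector choice is illustrative):

    metrics_collectors = [
      "operations",             # equivalently "crio_operations" or "container_runtime_crio_operations"
      "image_pulls_failures",
    ]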
	I0814 00:35:29.432225   44271 command_runner.go:130] > # The port on which the metrics server will listen.
	I0814 00:35:29.432234   44271 command_runner.go:130] > # metrics_port = 9090
	I0814 00:35:29.432242   44271 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0814 00:35:29.432251   44271 command_runner.go:130] > # metrics_socket = ""
	I0814 00:35:29.432261   44271 command_runner.go:130] > # The certificate for the secure metrics server.
	I0814 00:35:29.432274   44271 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0814 00:35:29.432288   44271 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0814 00:35:29.432301   44271 command_runner.go:130] > # certificate on any modification event.
	I0814 00:35:29.432311   44271 command_runner.go:130] > # metrics_cert = ""
	I0814 00:35:29.432319   44271 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0814 00:35:29.432329   44271 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0814 00:35:29.432339   44271 command_runner.go:130] > # metrics_key = ""
	I0814 00:35:29.432351   44271 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0814 00:35:29.432359   44271 command_runner.go:130] > [crio.tracing]
	I0814 00:35:29.432371   44271 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0814 00:35:29.432386   44271 command_runner.go:130] > # enable_tracing = false
	I0814 00:35:29.432393   44271 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0814 00:35:29.432403   44271 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0814 00:35:29.432415   44271 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0814 00:35:29.432425   44271 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
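As a worked example of the per-million sampling rate (endpoint and rate are placeholders): 100000 samples per million corresponds to roughly 10% of spans, and 1000000 means every span is sampled.

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "0.0.0.0:4317"
    tracing_sampling_rate_per_million = 100000   # ~10% of spans; 1000000 = always sample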
	I0814 00:35:29.432434   44271 command_runner.go:130] > # CRI-O NRI configuration.
	I0814 00:35:29.432443   44271 command_runner.go:130] > [crio.nri]
	I0814 00:35:29.432452   44271 command_runner.go:130] > # Globally enable or disable NRI.
	I0814 00:35:29.432461   44271 command_runner.go:130] > # enable_nri = false
	I0814 00:35:29.432470   44271 command_runner.go:130] > # NRI socket to listen on.
	I0814 00:35:29.432479   44271 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0814 00:35:29.432489   44271 command_runner.go:130] > # NRI plugin directory to use.
	I0814 00:35:29.432498   44271 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0814 00:35:29.432508   44271 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0814 00:35:29.432519   44271 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0814 00:35:29.432530   44271 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0814 00:35:29.432539   44271 command_runner.go:130] > # nri_disable_connections = false
	I0814 00:35:29.432548   44271 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0814 00:35:29.432558   44271 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0814 00:35:29.432564   44271 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0814 00:35:29.432572   44271 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0814 00:35:29.432578   44271 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0814 00:35:29.432585   44271 command_runner.go:130] > [crio.stats]
	I0814 00:35:29.432591   44271 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0814 00:35:29.432599   44271 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0814 00:35:29.432606   44271 command_runner.go:130] > # stats_collection_period = 0
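For instance, under the semantics described above:

    [crio.stats]
    stats_collection_period = 0     # 0: collect pod/container stats on demand
    # stats_collection_period = 10  # >0: collect every 10 seconds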
	I0814 00:35:29.432634   44271 command_runner.go:130] ! time="2024-08-14 00:35:29.387565487Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0814 00:35:29.432649   44271 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0814 00:35:29.432749   44271 cni.go:84] Creating CNI manager for ""
	I0814 00:35:29.432757   44271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 00:35:29.432765   44271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:35:29.432784   44271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-745925 NodeName:multinode-745925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 00:35:29.432947   44271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-745925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:35:29.433006   44271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:35:29.443631   44271 command_runner.go:130] > kubeadm
	I0814 00:35:29.443662   44271 command_runner.go:130] > kubectl
	I0814 00:35:29.443668   44271 command_runner.go:130] > kubelet
	I0814 00:35:29.443695   44271 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:35:29.443747   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:35:29.452610   44271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 00:35:29.469032   44271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:35:29.485178   44271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0814 00:35:29.500456   44271 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I0814 00:35:29.503765   44271 command_runner.go:130] > 192.168.39.201	control-plane.minikube.internal
	I0814 00:35:29.503844   44271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:35:29.637304   44271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:35:29.650883   44271 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925 for IP: 192.168.39.201
	I0814 00:35:29.650907   44271 certs.go:194] generating shared ca certs ...
	I0814 00:35:29.650928   44271 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:35:29.651104   44271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:35:29.651145   44271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:35:29.651155   44271 certs.go:256] generating profile certs ...
	I0814 00:35:29.651225   44271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/client.key
	I0814 00:35:29.651278   44271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key.a77e74ae
	I0814 00:35:29.651314   44271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key
	I0814 00:35:29.651324   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 00:35:29.651337   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 00:35:29.651352   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 00:35:29.651365   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 00:35:29.651380   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 00:35:29.651393   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 00:35:29.651406   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 00:35:29.651418   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 00:35:29.651466   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:35:29.651493   44271 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:35:29.651503   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:35:29.651526   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:35:29.651547   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:35:29.651573   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:35:29.651609   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:35:29.651643   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.651657   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.651670   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem -> /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.652235   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:35:29.674628   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:35:29.696286   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:35:29.717270   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:35:29.739572   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 00:35:29.761334   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 00:35:29.782503   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:35:29.804098   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 00:35:29.825978   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:35:29.847189   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:35:29.867924   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:35:29.889316   44271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:35:29.903920   44271 ssh_runner.go:195] Run: openssl version
	I0814 00:35:29.909175   44271 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0814 00:35:29.909253   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:35:29.918805   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922707   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922786   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922838   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.927994   44271 command_runner.go:130] > 51391683
	I0814 00:35:29.928121   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:35:29.936740   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:35:29.946889   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950847   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950870   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950905   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.956186   44271 command_runner.go:130] > 3ec20f2e
	I0814 00:35:29.956238   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:35:29.964992   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:35:29.975179   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979118   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979239   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979286   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.984278   44271 command_runner.go:130] > b5213941
	I0814 00:35:29.984328   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 00:35:29.992818   44271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:35:29.996719   44271 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:35:29.996738   44271 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0814 00:35:29.996744   44271 command_runner.go:130] > Device: 253,1	Inode: 7338518     Links: 1
	I0814 00:35:29.996750   44271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 00:35:29.996756   44271 command_runner.go:130] > Access: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996761   44271 command_runner.go:130] > Modify: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996765   44271 command_runner.go:130] > Change: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996770   44271 command_runner.go:130] >  Birth: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996808   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 00:35:30.001812   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.001885   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 00:35:30.006858   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.006916   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 00:35:30.011694   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.011888   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 00:35:30.016736   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.016781   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 00:35:30.021575   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.021802   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 00:35:30.026653   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.026711   44271 kubeadm.go:392] StartCluster: {Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:35:30.026818   44271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:35:30.026886   44271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:35:30.060478   44271 command_runner.go:130] > eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6
	I0814 00:35:30.060508   44271 command_runner.go:130] > da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42
	I0814 00:35:30.060517   44271 command_runner.go:130] > c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db
	I0814 00:35:30.060531   44271 command_runner.go:130] > c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e
	I0814 00:35:30.060540   44271 command_runner.go:130] > d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7
	I0814 00:35:30.060548   44271 command_runner.go:130] > da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba
	I0814 00:35:30.060557   44271 command_runner.go:130] > 98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7
	I0814 00:35:30.060575   44271 command_runner.go:130] > 0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963
	I0814 00:35:30.061958   44271 cri.go:89] found id: "eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6"
	I0814 00:35:30.061978   44271 cri.go:89] found id: "da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42"
	I0814 00:35:30.061985   44271 cri.go:89] found id: "c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db"
	I0814 00:35:30.061989   44271 cri.go:89] found id: "c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e"
	I0814 00:35:30.061993   44271 cri.go:89] found id: "d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7"
	I0814 00:35:30.061998   44271 cri.go:89] found id: "da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba"
	I0814 00:35:30.062002   44271 cri.go:89] found id: "98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7"
	I0814 00:35:30.062006   44271 cri.go:89] found id: "0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963"
	I0814 00:35:30.062010   44271 cri.go:89] found id: ""
	I0814 00:35:30.062078   44271 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.229350096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595834229329793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31876b3b-764d-4be0-b45e-8cf54dc0d7c4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.229896006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4f56154-2d55-4f55-8b30-0136e7b5e077 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.229951801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4f56154-2d55-4f55-8b30-0136e7b5e077 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.230306746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3834013e3a57d6a20bf3999d8ab486311628761b6cbb3a792f6731f48e873e6,PodSandboxId:88538e8639c0a344a975d49fc0aee49bdbdc39a385e85878f20bd116d378a30d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723595412998824243,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6,PodSandboxId:359828cbd3a6a9fad65e8c86e16e2f0e5deb75986dd961d24153e982a8727a72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723595356394401050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42,PodSandboxId:30168f7b62fd63b6b2c3212c5175b14ff61f4c808c3979ac07c1f5f6fcfa9335,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723595356336143983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db,PodSandboxId:e6bc55e0ce83be8673eeb2f68a71843cbaac04ddb0387fee4bf00160f7970974,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723595344652296579,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e,PodSandboxId:55e339b006ca5ee34d4de2e10968d4bc40458ca8077168d59b722769f68c5790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723595344559937852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-
6b8e826fc666,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba,PodSandboxId:9823caa1bb86e291545ce708c22d1d2789e8132ae25167c05224476dae42fc58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723595333719586256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{i
o.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7,PodSandboxId:b7ca0981e0cd9d9d8d2c48e90a3df655ed0174e28d6bf844e716f4ad09d31c68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723595333738785481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963,PodSandboxId:43bc1a5d8bfa6b0dec1e3817f318d1617ddcdf10ba744772540fbd7d75cba12f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723595333701664563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7,PodSandboxId:f58497b82867e2069e6294da926112064a0da4dba22ae94753c88711bb267a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723595333706988101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4f56154-2d55-4f55-8b30-0136e7b5e077 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.360052005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a58807bd-1135-475e-a7d6-75c9d5d6825b name=/runtime.v1.RuntimeService/Version
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.360124268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a58807bd-1135-475e-a7d6-75c9d5d6825b name=/runtime.v1.RuntimeService/Version
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.361388315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d48d0383-9a83-4f36-9165-d814832b19ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.361869085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595834361774193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d48d0383-9a83-4f36-9165-d814832b19ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.362313681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b44ec50-fdb1-4440-b142-b79d8318881a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.362368824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b44ec50-fdb1-4440-b142-b79d8318881a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:37:14 multinode-745925 crio[2741]: time="2024-08-14 00:37:14.362748667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3834013e3a57d6a20bf3999d8ab486311628761b6cbb3a792f6731f48e873e6,PodSandboxId:88538e8639c0a344a975d49fc0aee49bdbdc39a385e85878f20bd116d378a30d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723595412998824243,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6,PodSandboxId:359828cbd3a6a9fad65e8c86e16e2f0e5deb75986dd961d24153e982a8727a72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723595356394401050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42,PodSandboxId:30168f7b62fd63b6b2c3212c5175b14ff61f4c808c3979ac07c1f5f6fcfa9335,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723595356336143983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db,PodSandboxId:e6bc55e0ce83be8673eeb2f68a71843cbaac04ddb0387fee4bf00160f7970974,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723595344652296579,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e,PodSandboxId:55e339b006ca5ee34d4de2e10968d4bc40458ca8077168d59b722769f68c5790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723595344559937852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-
6b8e826fc666,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba,PodSandboxId:9823caa1bb86e291545ce708c22d1d2789e8132ae25167c05224476dae42fc58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723595333719586256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{i
o.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7,PodSandboxId:b7ca0981e0cd9d9d8d2c48e90a3df655ed0174e28d6bf844e716f4ad09d31c68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723595333738785481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963,PodSandboxId:43bc1a5d8bfa6b0dec1e3817f318d1617ddcdf10ba744772540fbd7d75cba12f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723595333701664563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7,PodSandboxId:f58497b82867e2069e6294da926112064a0da4dba22ae94753c88711bb267a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723595333706988101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b44ec50-fdb1-4440-b142-b79d8318881a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bdd01c460191f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   0d5d2a387657a       busybox-7dff88458-q5qs4
	9a8f8d0cbc5b0       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   5215f31e9dadb       kindnet-dpqll
	83669ef86608e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   83e8f1fa57c59       coredns-6f6b679f8f-42npp
	1d40d3ea3bdcb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   8e7eac2966ebf       kube-proxy-wjs78
	8f8ca6a66fd01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   994050b8051db       storage-provisioner
	4388dbc128543       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   9a12d7bb37b38       kube-scheduler-multinode-745925
	26907166372cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   926eca9865090       etcd-multinode-745925
	5293423821d46       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   8505babb9c47b       kube-controller-manager-multinode-745925
	b16f783a23a0c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   f049204edba77       kube-apiserver-multinode-745925
	c3834013e3a57       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   88538e8639c0a       busybox-7dff88458-q5qs4
	eae8460ab3854       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   359828cbd3a6a       coredns-6f6b679f8f-42npp
	da26853bb0e0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   30168f7b62fd6       storage-provisioner
	c0a5b54e67fb3       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      8 minutes ago        Exited              kindnet-cni               0                   e6bc55e0ce83b       kindnet-dpqll
	c4fcefb3fdcea       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   55e339b006ca5       kube-proxy-wjs78
	d3b1b29521697       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   b7ca0981e0cd9       kube-scheduler-multinode-745925
	da0de171031bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   9823caa1bb86e       etcd-multinode-745925
	98cb3f5f3e0f6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   f58497b82867e       kube-controller-manager-multinode-745925
	0826d520b837d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   43bc1a5d8bfa6       kube-apiserver-multinode-745925
	
	
	==> coredns [83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51980 - 44192 "HINFO IN 4627935223448981007.9129784899072471225. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01195331s
	
	
	==> coredns [eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6] <==
	[INFO] 10.244.1.2:46018 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534261s
	[INFO] 10.244.1.2:43464 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070235s
	[INFO] 10.244.1.2:38869 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075645s
	[INFO] 10.244.1.2:34500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001099481s
	[INFO] 10.244.1.2:55262 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064953s
	[INFO] 10.244.1.2:60067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074835s
	[INFO] 10.244.1.2:54269 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058182s
	[INFO] 10.244.0.3:33565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072903s
	[INFO] 10.244.0.3:55121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159692s
	[INFO] 10.244.0.3:50568 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036901s
	[INFO] 10.244.0.3:54217 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031753s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105057s
	[INFO] 10.244.1.2:35711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064728s
	[INFO] 10.244.1.2:59512 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054406s
	[INFO] 10.244.1.2:37804 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048475s
	[INFO] 10.244.0.3:33612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080442s
	[INFO] 10.244.0.3:48076 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107043s
	[INFO] 10.244.0.3:50714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062317s
	[INFO] 10.244.0.3:42453 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050605s
	[INFO] 10.244.1.2:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151408s
	[INFO] 10.244.1.2:42388 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090237s
	[INFO] 10.244.1.2:57343 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102036s
	[INFO] 10.244.1.2:60620 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107288s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-745925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-745925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=multinode-745925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_28_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-745925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:37:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:29:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    multinode-745925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a29b00ae6c9445bb9ec55db18ac99634
	  System UUID:                a29b00ae-6c94-45bb-9ec5-5db18ac99634
	  Boot ID:                    d038d6a5-2e6b-4a2c-a4b9-cc85ebf99a02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5qs4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-6f6b679f8f-42npp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-multinode-745925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-dpqll                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-multinode-745925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-multinode-745925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-wjs78                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-multinode-745925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m9s                 kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 8m16s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m16s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m15s                kubelet          Node multinode-745925 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m15s                kubelet          Node multinode-745925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m15s                kubelet          Node multinode-745925 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m12s                node-controller  Node multinode-745925 event: Registered Node multinode-745925 in Controller
	  Normal  NodeReady                7m59s                kubelet          Node multinode-745925 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet          Node multinode-745925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet          Node multinode-745925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet          Node multinode-745925 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                  node-controller  Node multinode-745925 event: Registered Node multinode-745925 in Controller
	
	
	Name:               multinode-745925-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-745925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=multinode-745925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_36_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:36:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-745925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:37:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:36:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:36:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:36:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:36:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-745925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 91f5344f07d1452cb1780ba954b6bfac
	  System UUID:                91f5344f-07d1-452c-b178-0ba954b6bfac
	  Boot ID:                    0e30ab73-0588-4779-8d5b-6d70176877c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mklsc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-jldn7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-69crd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 7m22s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m28s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m28s)  kubelet          Node multinode-745925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m28s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m7s                   kubelet          Node multinode-745925-m02 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet          Node multinode-745925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                    node-controller  Node multinode-745925-m02 event: Registered Node multinode-745925-m02 in Controller
	  Normal  NodeReady                40s                    kubelet          Node multinode-745925-m02 status is now: NodeReady
	
	
	Name:               multinode-745925-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-745925-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=multinode-745925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_36_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:36:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-745925-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:37:11 +0000   Wed, 14 Aug 2024 00:36:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:37:11 +0000   Wed, 14 Aug 2024 00:36:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:37:11 +0000   Wed, 14 Aug 2024 00:36:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:37:11 +0000   Wed, 14 Aug 2024 00:37:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    multinode-745925-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ebca15d1bf7b4c68a73b32194f431f46
	  System UUID:                ebca15d1-bf7b-4c68-a73b-32194f431f46
	  Boot ID:                    1ac534e4-11bc-4e7e-b4ee-f84d1c97f3e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vlh75       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-n2qv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From           Message
	  ----    ------                   ----                   ----           -------
	  Normal  Starting                 5m41s                  kube-proxy     
	  Normal  Starting                 6m30s                  kube-proxy     
	  Normal  Starting                 17s                    kube-proxy     
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet        Node multinode-745925-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet        Node multinode-745925-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet        Node multinode-745925-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m46s                  kubelet        Starting kubelet.
	  Normal  NodeReady                5m26s                  kubelet        Node multinode-745925-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     21s                    cidrAllocator  Node multinode-745925-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet        Node multinode-745925-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet        Node multinode-745925-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet        Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet        Node multinode-745925-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.047949] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.185818] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.096502] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.240879] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +3.695074] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.227842] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.985702] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.084765] kauditd_printk_skb: 69 callbacks suppressed
	[Aug14 00:29] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.137775] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.318947] kauditd_printk_skb: 60 callbacks suppressed
	[Aug14 00:30] kauditd_printk_skb: 12 callbacks suppressed
	[Aug14 00:35] systemd-fstab-generator[2657]: Ignoring "noauto" option for root device
	[  +0.142572] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[  +0.171268] systemd-fstab-generator[2684]: Ignoring "noauto" option for root device
	[  +0.130113] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.264436] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +3.903620] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +1.835360] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +0.086317] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.575789] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.687003] systemd-fstab-generator[3788]: Ignoring "noauto" option for root device
	[  +0.108937] kauditd_printk_skb: 36 callbacks suppressed
	[Aug14 00:36] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a] <==
	{"level":"info","ts":"2024-08-14T00:35:32.783418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 switched to configuration voters=(8292785523550360663)"}
	{"level":"info","ts":"2024-08-14T00:35:32.792148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","added-peer-id":"7315e47f21b89457","added-peer-peer-urls":["https://192.168.39.201:2380"]}
	{"level":"info","ts":"2024-08-14T00:35:32.792290Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:35:32.792339Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:35:32.794137Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T00:35:32.802319Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:35:32.802416Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:35:32.806985Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7315e47f21b89457","initial-advertise-peer-urls":["https://192.168.39.201:2380"],"listen-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.201:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T00:35:32.810869Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T00:35:34.534360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgPreVoteResp from 7315e47f21b89457 at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgVoteResp from 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7315e47f21b89457 elected leader 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.538898Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7315e47f21b89457","local-member-attributes":"{Name:multinode-745925 ClientURLs:[https://192.168.39.201:2379]}","request-path":"/0/members/7315e47f21b89457/attributes","cluster-id":"1777413e1d1fef45","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:35:34.538941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:35:34.539140Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:35:34.539161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:35:34.539178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:35:34.540110Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:35:34.540931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:35:34.540113Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:35:34.541723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.201:2379"}
	
	
	==> etcd [da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba] <==
	{"level":"info","ts":"2024-08-14T00:28:55.111326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:28:55.111349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:28:55.111653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:28:55.111971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112067Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112109Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112432Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:28:55.112698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:28:55.113210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:28:55.113521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.201:2379"}
	{"level":"warn","ts":"2024-08-14T00:29:46.986169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.339589ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689172006015982457 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-745925-m02.17eb70df49150e60\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-745925-m02.17eb70df49150e60\" value_size:642 lease:1465799969161206075 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T00:29:46.986409Z","caller":"traceutil/trace.go:171","msg":"trace[132266961] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"240.714098ms","start":"2024-08-14T00:29:46.745683Z","end":"2024-08-14T00:29:46.986397Z","steps":["trace[132266961] 'process raft request'  (duration: 76.754473ms)","trace[132266961] 'compare'  (duration: 163.171551ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T00:30:39.467992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.741361ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689172006015982967 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-745925-m03.17eb70eb82fe355b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-745925-m03.17eb70eb82fe355b\" value_size:646 lease:1465799969161206861 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T00:30:39.468386Z","caller":"traceutil/trace.go:171","msg":"trace[1910179642] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"230.216937ms","start":"2024-08-14T00:30:39.238135Z","end":"2024-08-14T00:30:39.468352Z","steps":["trace[1910179642] 'process raft request'  (duration: 74.953018ms)","trace[1910179642] 'compare'  (duration: 154.61239ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T00:31:33.249731Z","caller":"traceutil/trace.go:171","msg":"trace[1150244368] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"139.477068ms","start":"2024-08-14T00:31:33.110235Z","end":"2024-08-14T00:31:33.249712Z","steps":["trace[1150244368] 'process raft request'  (duration: 138.337249ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T00:33:53.710910Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T00:33:53.711047Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-745925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"]}
	{"level":"warn","ts":"2024-08-14T00:33:53.711164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.711260Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.790638Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.201:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.790721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.201:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T00:33:53.792200Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7315e47f21b89457","current-leader-member-id":"7315e47f21b89457"}
	{"level":"info","ts":"2024-08-14T00:33:53.794555Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:33:53.794700Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:33:53.794732Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-745925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"]}
	
	
	==> kernel <==
	 00:37:14 up 8 min,  0 users,  load average: 0.29, 0.34, 0.19
	Linux multinode-745925 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7] <==
	I0814 00:36:28.270586       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:36:38.270253       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:36:38.270371       1 main.go:299] handling current node
	I0814 00:36:38.270398       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:36:38.270416       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:36:38.270574       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:36:38.270620       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:36:48.269553       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:36:48.269720       1 main.go:299] handling current node
	I0814 00:36:48.269754       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:36:48.269869       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:36:48.270016       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:36:48.270082       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:36:58.270462       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:36:58.270530       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:36:58.270673       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:36:58.270692       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.2.0/24] 
	I0814 00:36:58.270734       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:36:58.270750       1 main.go:299] handling current node
	I0814 00:37:08.270928       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:37:08.270970       1 main.go:299] handling current node
	I0814 00:37:08.270990       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:37:08.270996       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:37:08.271120       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:37:08.271141       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db] <==
	I0814 00:33:05.680246       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:15.682842       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:15.682953       1 main.go:299] handling current node
	I0814 00:33:15.682981       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:15.682999       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:15.683150       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:15.683173       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:25.681267       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:25.681295       1 main.go:299] handling current node
	I0814 00:33:25.681308       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:25.681313       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:25.681496       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:25.681514       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:35.689356       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:35.689502       1 main.go:299] handling current node
	I0814 00:33:35.689534       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:35.689554       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:35.689709       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:35.689736       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:45.682662       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:45.682877       1 main.go:299] handling current node
	I0814 00:33:45.682913       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:45.682934       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:45.683092       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:45.683113       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963] <==
	W0814 00:33:53.741269       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741363       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741419       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741500       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741556       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741604       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741650       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741709       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741768       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741868       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741933       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742090       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742160       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742209       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742256       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742307       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742364       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742418       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742465       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742622       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742699       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742758       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742932       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.743006       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.743080       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2] <==
	I0814 00:35:35.744895       1 aggregator.go:171] initial CRD sync complete...
	I0814 00:35:35.744931       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 00:35:35.744942       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 00:35:35.766964       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 00:35:35.786313       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 00:35:35.787542       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 00:35:35.789976       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:35:35.812661       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:35:35.812702       1 policy_source.go:224] refreshing policies
	I0814 00:35:35.847154       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:35:35.866849       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 00:35:35.866887       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 00:35:35.866963       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 00:35:35.867012       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 00:35:35.867018       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0814 00:35:35.875047       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0814 00:35:35.875241       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:35:36.675902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 00:35:37.869372       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 00:35:38.044392       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 00:35:38.063939       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 00:35:38.134312       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 00:35:38.144051       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 00:35:39.211744       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:35:39.460927       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b] <==
	I0814 00:36:38.237565       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.904321ms"
	I0814 00:36:38.238437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.747µs"
	I0814 00:36:39.142275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:36:45.380201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:36:51.974439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:51.990422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:52.206789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:36:52.206926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:53.290526       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-745925-m03\" does not exist"
	I0814 00:36:53.294898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:36:53.311962       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-745925-m03" podCIDRs=["10.244.2.0/24"]
	I0814 00:36:53.312013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	E0814 00:36:53.323081       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-745925-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-745925-m03" podCIDRs=["10.244.3.0/24"]
	E0814 00:36:53.323227       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-745925-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-745925-m03"
	E0814 00:36:53.323395       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-745925-m03': failed to patch node CIDR: Node \"multinode-745925-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0814 00:36:53.323507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:53.328794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:53.702020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:54.039026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:54.160539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:03.400165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:11.621923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:37:11.622309       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:11.633729       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:14.161509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	
	
	==> kube-controller-manager [98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7] <==
	I0814 00:31:28.004175       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:28.004925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:28.998411       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:28.998658       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-745925-m03\" does not exist"
	I0814 00:31:29.030492       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-745925-m03" podCIDRs=["10.244.4.0/24"]
	I0814 00:31:29.030527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.030548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.042181       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.051234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.373645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:33.252200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:39.297700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:48.447288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:48.447347       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:48.458890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:53.054088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.070708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:33.071033       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m03"
	I0814 00:32:33.074241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.095573       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:33.113502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.138160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.635545ms"
	I0814 00:32:33.138846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.562µs"
	I0814 00:32:38.141476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:48.213395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	
	
	==> kube-proxy [1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:35:37.574430       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:35:37.589325       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E0814 00:35:37.589424       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:35:37.640432       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:35:37.640492       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:35:37.640520       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:35:37.643905       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:35:37.644208       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:35:37.644366       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:35:37.645752       1 config.go:197] "Starting service config controller"
	I0814 00:35:37.645885       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:35:37.645973       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:35:37.646008       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:35:37.646513       1 config.go:326] "Starting node config controller"
	I0814 00:35:37.647873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:35:37.746060       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:35:37.746111       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:35:37.748163       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:29:04.986961       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:29:05.003706       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E0814 00:29:05.003783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:29:05.073544       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:29:05.073598       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:29:05.073629       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:29:05.075720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:29:05.076078       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:29:05.076102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:29:05.077513       1 config.go:197] "Starting service config controller"
	I0814 00:29:05.077557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:29:05.077578       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:29:05.077582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:29:05.079610       1 config.go:326] "Starting node config controller"
	I0814 00:29:05.079636       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:29:05.178226       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:29:05.178287       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:29:05.179839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48] <==
	I0814 00:35:33.229607       1 serving.go:386] Generated self-signed cert in-memory
	W0814 00:35:35.690886       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 00:35:35.691047       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 00:35:35.691075       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 00:35:35.691156       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 00:35:35.765310       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 00:35:35.765644       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:35:35.768108       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 00:35:35.768550       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 00:35:35.768578       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 00:35:35.769289       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0814 00:35:35.795147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:35:35.795203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.796440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:35:35.798188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.798397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 00:35:35.798513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.800545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:35:35.802871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 00:35:35.870271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7] <==
	E0814 00:28:56.317230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 00:28:56.317296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:56.317349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:28:56.317406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:56.318051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:28:56.318111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:28:56.318191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.168970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:28:57.169126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.406763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:28:57.406940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.436759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 00:28:57.436886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.499518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 00:28:57.499563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.525941       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:28:57.525995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 00:28:57.799295       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 00:33:53.714717       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 00:35:41 multinode-745925 kubelet[2952]: E0814 00:35:41.661750    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595741661088061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:35:41 multinode-745925 kubelet[2952]: E0814 00:35:41.661829    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595741661088061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:35:51 multinode-745925 kubelet[2952]: E0814 00:35:51.663646    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595751663180814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:35:51 multinode-745925 kubelet[2952]: E0814 00:35:51.664076    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595751663180814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:01 multinode-745925 kubelet[2952]: E0814 00:36:01.665975    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595761665580534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:01 multinode-745925 kubelet[2952]: E0814 00:36:01.666014    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595761665580534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:11 multinode-745925 kubelet[2952]: E0814 00:36:11.668523    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595771667955418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:11 multinode-745925 kubelet[2952]: E0814 00:36:11.668910    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595771667955418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:21 multinode-745925 kubelet[2952]: E0814 00:36:21.671959    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595781671355066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:21 multinode-745925 kubelet[2952]: E0814 00:36:21.672002    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595781671355066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:31 multinode-745925 kubelet[2952]: E0814 00:36:31.622098    2952 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 00:36:31 multinode-745925 kubelet[2952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 00:36:31 multinode-745925 kubelet[2952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:36:31 multinode-745925 kubelet[2952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:36:31 multinode-745925 kubelet[2952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:36:31 multinode-745925 kubelet[2952]: E0814 00:36:31.673859    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595791673432637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:31 multinode-745925 kubelet[2952]: E0814 00:36:31.673885    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595791673432637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:41 multinode-745925 kubelet[2952]: E0814 00:36:41.674905    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595801674629541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:41 multinode-745925 kubelet[2952]: E0814 00:36:41.675001    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595801674629541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:51 multinode-745925 kubelet[2952]: E0814 00:36:51.677411    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595811677090619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:36:51 multinode-745925 kubelet[2952]: E0814 00:36:51.677469    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595811677090619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:37:01 multinode-745925 kubelet[2952]: E0814 00:37:01.679951    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595821679090226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:37:01 multinode-745925 kubelet[2952]: E0814 00:37:01.680026    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595821679090226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:37:11 multinode-745925 kubelet[2952]: E0814 00:37:11.682015    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595831681600122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:37:11 multinode-745925 kubelet[2952]: E0814 00:37:11.682294    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595831681600122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:37:13.964643   45337 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19429-9425/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-745925 -n multinode-745925
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-745925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.69s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-745925 stop: exit status 82 (2m0.450075919s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-745925-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-745925 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-745925 status: exit status 3 (18.687155354s)

                                                
                                                
-- stdout --
	multinode-745925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-745925-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:39:36.918362   46008 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host
	E0814 00:39:36.918409   46008 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.55:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-745925 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-745925 -n multinode-745925
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-745925 logs -n 25: (1.335155695s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925:/home/docker/cp-test_multinode-745925-m02_multinode-745925.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925 sudo cat                                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m02_multinode-745925.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03:/home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925-m03 sudo cat                                   | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp testdata/cp-test.txt                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925:/home/docker/cp-test_multinode-745925-m03_multinode-745925.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925 sudo cat                                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m03_multinode-745925.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt                       | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m02:/home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n                                                                 | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | multinode-745925-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-745925 ssh -n multinode-745925-m02 sudo cat                                   | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | /home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-745925 node stop m03                                                          | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	| node    | multinode-745925 node start                                                             | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC | 14 Aug 24 00:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-745925                                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC |                     |
	| stop    | -p multinode-745925                                                                     | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:31 UTC |                     |
	| start   | -p multinode-745925                                                                     | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:33 UTC | 14 Aug 24 00:37 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-745925                                                                | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:37 UTC |                     |
	| node    | multinode-745925 node delete                                                            | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:37 UTC | 14 Aug 24 00:37 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-745925 stop                                                                   | multinode-745925 | jenkins | v1.33.1 | 14 Aug 24 00:37 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:33:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:33:52.736985   44271 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:33:52.737282   44271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:33:52.737292   44271 out.go:304] Setting ErrFile to fd 2...
	I0814 00:33:52.737299   44271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:33:52.737529   44271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:33:52.738129   44271 out.go:298] Setting JSON to false
	I0814 00:33:52.739087   44271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4579,"bootTime":1723591054,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:33:52.739151   44271 start.go:139] virtualization: kvm guest
	I0814 00:33:52.742156   44271 out.go:177] * [multinode-745925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:33:52.743536   44271 notify.go:220] Checking for updates...
	I0814 00:33:52.743548   44271 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:33:52.744919   44271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:33:52.746133   44271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:33:52.747363   44271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:33:52.748491   44271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:33:52.749600   44271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:33:52.751357   44271 config.go:182] Loaded profile config "multinode-745925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:33:52.751428   44271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:33:52.751841   44271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:33:52.751907   44271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:33:52.766796   44271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0814 00:33:52.767156   44271 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:33:52.767686   44271 main.go:141] libmachine: Using API Version  1
	I0814 00:33:52.767707   44271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:33:52.768021   44271 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:33:52.768205   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.802988   44271 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:33:52.804368   44271 start.go:297] selected driver: kvm2
	I0814 00:33:52.804383   44271 start.go:901] validating driver "kvm2" against &{Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:33:52.804544   44271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:33:52.804864   44271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:33:52.804962   44271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:33:52.819371   44271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:33:52.820014   44271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:33:52.820090   44271 cni.go:84] Creating CNI manager for ""
	I0814 00:33:52.820108   44271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 00:33:52.820161   44271 start.go:340] cluster config:
	{Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:33:52.820283   44271 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:33:52.822701   44271 out.go:177] * Starting "multinode-745925" primary control-plane node in "multinode-745925" cluster
	I0814 00:33:52.824172   44271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:33:52.824206   44271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:33:52.824215   44271 cache.go:56] Caching tarball of preloaded images
	I0814 00:33:52.824307   44271 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:33:52.824322   44271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 00:33:52.824457   44271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/config.json ...
	I0814 00:33:52.824690   44271 start.go:360] acquireMachinesLock for multinode-745925: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:33:52.824741   44271 start.go:364] duration metric: took 30.68µs to acquireMachinesLock for "multinode-745925"
	I0814 00:33:52.824760   44271 start.go:96] Skipping create...Using existing machine configuration
	I0814 00:33:52.824781   44271 fix.go:54] fixHost starting: 
	I0814 00:33:52.825057   44271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:33:52.825090   44271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:33:52.839157   44271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0814 00:33:52.839545   44271 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:33:52.839939   44271 main.go:141] libmachine: Using API Version  1
	I0814 00:33:52.839956   44271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:33:52.840234   44271 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:33:52.840415   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.840665   44271 main.go:141] libmachine: (multinode-745925) Calling .GetState
	I0814 00:33:52.842142   44271 fix.go:112] recreateIfNeeded on multinode-745925: state=Running err=<nil>
	W0814 00:33:52.842156   44271 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 00:33:52.843948   44271 out.go:177] * Updating the running kvm2 "multinode-745925" VM ...
	I0814 00:33:52.845073   44271 machine.go:94] provisionDockerMachine start ...
	I0814 00:33:52.845088   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:33:52.845273   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:52.847942   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.848435   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:52.848460   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.848565   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:52.848728   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.848880   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.849010   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:52.849163   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:52.849343   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:52.849356   44271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:33:52.954665   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-745925
	
	I0814 00:33:52.954702   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:52.954959   44271 buildroot.go:166] provisioning hostname "multinode-745925"
	I0814 00:33:52.954981   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:52.955170   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:52.957807   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.958290   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:52.958316   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:52.958456   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:52.958620   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.958736   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:52.958873   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:52.958998   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:52.959181   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:52.959192   44271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-745925 && echo "multinode-745925" | sudo tee /etc/hostname
	I0814 00:33:53.081508   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-745925
	
	I0814 00:33:53.081545   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.084003   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.084348   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.084386   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.084561   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.084763   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.084910   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.085053   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.085232   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:53.085503   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:53.085530   44271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-745925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-745925/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-745925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:33:53.195129   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:33:53.195170   44271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:33:53.195207   44271 buildroot.go:174] setting up certificates
	I0814 00:33:53.195216   44271 provision.go:84] configureAuth start
	I0814 00:33:53.195225   44271 main.go:141] libmachine: (multinode-745925) Calling .GetMachineName
	I0814 00:33:53.195621   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:33:53.198139   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.198610   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.198636   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.198786   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.200839   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.201150   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.201188   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.201320   44271 provision.go:143] copyHostCerts
	I0814 00:33:53.201350   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:33:53.201385   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:33:53.201394   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:33:53.201460   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:33:53.201552   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:33:53.201570   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:33:53.201583   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:33:53.201610   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:33:53.201708   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:33:53.201729   44271 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:33:53.201736   44271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:33:53.201759   44271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:33:53.201819   44271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.multinode-745925 san=[127.0.0.1 192.168.39.201 localhost minikube multinode-745925]
	I0814 00:33:53.440737   44271 provision.go:177] copyRemoteCerts
	I0814 00:33:53.440797   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:33:53.440820   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.443833   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.444164   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.444193   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.444367   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.444585   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.444770   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.444894   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:33:53.528610   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 00:33:53.528674   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:33:53.551326   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 00:33:53.551392   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0814 00:33:53.573438   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 00:33:53.573542   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 00:33:53.597025   44271 provision.go:87] duration metric: took 401.796686ms to configureAuth
	I0814 00:33:53.597058   44271 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:33:53.597360   44271 config.go:182] Loaded profile config "multinode-745925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:33:53.597465   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:33:53.600460   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.600832   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:33:53.600859   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:33:53.600993   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:33:53.601203   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.601375   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:33:53.601531   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:33:53.601706   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:33:53.601874   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:33:53.601888   44271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:35:24.313993   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:35:24.314020   44271 machine.go:97] duration metric: took 1m31.468935821s to provisionDockerMachine
	I0814 00:35:24.314033   44271 start.go:293] postStartSetup for "multinode-745925" (driver="kvm2")
	I0814 00:35:24.314060   44271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:35:24.314085   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.314392   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:35:24.314417   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.317239   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.317767   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.317796   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.317964   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.318147   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.318356   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.318568   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.400961   44271 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:35:24.404765   44271 command_runner.go:130] > NAME=Buildroot
	I0814 00:35:24.404785   44271 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0814 00:35:24.404789   44271 command_runner.go:130] > ID=buildroot
	I0814 00:35:24.404794   44271 command_runner.go:130] > VERSION_ID=2023.02.9
	I0814 00:35:24.404799   44271 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0814 00:35:24.404909   44271 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:35:24.404929   44271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:35:24.404998   44271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:35:24.405093   44271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:35:24.405106   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /etc/ssl/certs/165892.pem
	I0814 00:35:24.405224   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:35:24.414481   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:35:24.436348   44271 start.go:296] duration metric: took 122.30144ms for postStartSetup
	I0814 00:35:24.436387   44271 fix.go:56] duration metric: took 1m31.611618248s for fixHost
	I0814 00:35:24.436406   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.439037   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.439331   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.439356   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.439499   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.439682   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.439837   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.439939   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.440100   44271 main.go:141] libmachine: Using SSH client type: native
	I0814 00:35:24.440253   44271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0814 00:35:24.440262   44271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 00:35:24.542542   44271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723595724.512472430
	
	I0814 00:35:24.542570   44271 fix.go:216] guest clock: 1723595724.512472430
	I0814 00:35:24.542577   44271 fix.go:229] Guest: 2024-08-14 00:35:24.51247243 +0000 UTC Remote: 2024-08-14 00:35:24.436391084 +0000 UTC m=+91.735393673 (delta=76.081346ms)
	I0814 00:35:24.542595   44271 fix.go:200] guest clock delta is within tolerance: 76.081346ms
	I0814 00:35:24.542599   44271 start.go:83] releasing machines lock for "multinode-745925", held for 1m31.717847085s
	I0814 00:35:24.542618   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.542877   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:35:24.545337   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.545734   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.545763   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.545914   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546389   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546600   44271 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:35:24.546712   44271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:35:24.546769   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.546792   44271 ssh_runner.go:195] Run: cat /version.json
	I0814 00:35:24.546814   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:35:24.549376   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549565   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549785   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.549820   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.549948   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:24.549960   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.549971   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:24.550140   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.550165   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:35:24.550312   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:35:24.550333   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.550491   44271 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:35:24.550511   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.550607   44271 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:35:24.656761   44271 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0814 00:35:24.657388   44271 command_runner.go:130] > {"iso_version": "v1.33.1-1723567878-19429", "kicbase_version": "v0.0.44-1723026928-19389", "minikube_version": "v1.33.1", "commit": "99323a71d52eff08226c75fcaff04297eb5d3584"}
	I0814 00:35:24.657557   44271 ssh_runner.go:195] Run: systemctl --version
	I0814 00:35:24.663108   44271 command_runner.go:130] > systemd 252 (252)
	I0814 00:35:24.663136   44271 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0814 00:35:24.663303   44271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:35:24.823032   44271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 00:35:24.828427   44271 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0814 00:35:24.828605   44271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:35:24.828666   44271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:35:24.837558   44271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 00:35:24.837581   44271 start.go:495] detecting cgroup driver to use...
	I0814 00:35:24.837646   44271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:35:24.853015   44271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:35:24.865844   44271 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:35:24.865906   44271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:35:24.878754   44271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:35:24.891135   44271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:35:25.029959   44271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:35:25.170308   44271 docker.go:233] disabling docker service ...
	I0814 00:35:25.170385   44271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:35:25.185699   44271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:35:25.198393   44271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:35:25.333381   44271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:35:25.468063   44271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:35:25.481207   44271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:35:25.498586   44271 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0814 00:35:25.499170   44271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 00:35:25.499224   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.509694   44271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:35:25.509756   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.519317   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.528597   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.539373   44271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:35:25.549149   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.558645   44271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.569126   44271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:35:25.578563   44271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:35:25.587019   44271 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0814 00:35:25.587177   44271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:35:25.595505   44271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:35:25.731918   44271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:35:29.186981   44271 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.455027566s)
	I0814 00:35:29.187008   44271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:35:29.187058   44271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:35:29.192103   44271 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0814 00:35:29.192124   44271 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0814 00:35:29.192131   44271 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0814 00:35:29.192138   44271 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 00:35:29.192145   44271 command_runner.go:130] > Access: 2024-08-14 00:35:29.144137867 +0000
	I0814 00:35:29.192162   44271 command_runner.go:130] > Modify: 2024-08-14 00:35:29.056136228 +0000
	I0814 00:35:29.192170   44271 command_runner.go:130] > Change: 2024-08-14 00:35:29.056136228 +0000
	I0814 00:35:29.192176   44271 command_runner.go:130] >  Birth: -
	I0814 00:35:29.192374   44271 start.go:563] Will wait 60s for crictl version
	I0814 00:35:29.192423   44271 ssh_runner.go:195] Run: which crictl
	I0814 00:35:29.195702   44271 command_runner.go:130] > /usr/bin/crictl
	I0814 00:35:29.195838   44271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:35:29.238596   44271 command_runner.go:130] > Version:  0.1.0
	I0814 00:35:29.238617   44271 command_runner.go:130] > RuntimeName:  cri-o
	I0814 00:35:29.238767   44271 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0814 00:35:29.238792   44271 command_runner.go:130] > RuntimeApiVersion:  v1
	I0814 00:35:29.240822   44271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:35:29.240895   44271 ssh_runner.go:195] Run: crio --version
	I0814 00:35:29.271735   44271 command_runner.go:130] > crio version 1.29.1
	I0814 00:35:29.271762   44271 command_runner.go:130] > Version:        1.29.1
	I0814 00:35:29.271777   44271 command_runner.go:130] > GitCommit:      unknown
	I0814 00:35:29.271784   44271 command_runner.go:130] > GitCommitDate:  unknown
	I0814 00:35:29.271790   44271 command_runner.go:130] > GitTreeState:   clean
	I0814 00:35:29.271797   44271 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 00:35:29.271801   44271 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 00:35:29.271805   44271 command_runner.go:130] > Compiler:       gc
	I0814 00:35:29.271810   44271 command_runner.go:130] > Platform:       linux/amd64
	I0814 00:35:29.271825   44271 command_runner.go:130] > Linkmode:       dynamic
	I0814 00:35:29.271833   44271 command_runner.go:130] > BuildTags:      
	I0814 00:35:29.271838   44271 command_runner.go:130] >   containers_image_ostree_stub
	I0814 00:35:29.271845   44271 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 00:35:29.271857   44271 command_runner.go:130] >   btrfs_noversion
	I0814 00:35:29.271864   44271 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 00:35:29.271871   44271 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 00:35:29.271879   44271 command_runner.go:130] >   seccomp
	I0814 00:35:29.271884   44271 command_runner.go:130] > LDFlags:          unknown
	I0814 00:35:29.271889   44271 command_runner.go:130] > SeccompEnabled:   true
	I0814 00:35:29.271893   44271 command_runner.go:130] > AppArmorEnabled:  false
	I0814 00:35:29.271959   44271 ssh_runner.go:195] Run: crio --version
	I0814 00:35:29.303471   44271 command_runner.go:130] > crio version 1.29.1
	I0814 00:35:29.303490   44271 command_runner.go:130] > Version:        1.29.1
	I0814 00:35:29.303495   44271 command_runner.go:130] > GitCommit:      unknown
	I0814 00:35:29.303499   44271 command_runner.go:130] > GitCommitDate:  unknown
	I0814 00:35:29.303504   44271 command_runner.go:130] > GitTreeState:   clean
	I0814 00:35:29.303510   44271 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 00:35:29.303514   44271 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 00:35:29.303518   44271 command_runner.go:130] > Compiler:       gc
	I0814 00:35:29.303522   44271 command_runner.go:130] > Platform:       linux/amd64
	I0814 00:35:29.303526   44271 command_runner.go:130] > Linkmode:       dynamic
	I0814 00:35:29.303531   44271 command_runner.go:130] > BuildTags:      
	I0814 00:35:29.303535   44271 command_runner.go:130] >   containers_image_ostree_stub
	I0814 00:35:29.303539   44271 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 00:35:29.303542   44271 command_runner.go:130] >   btrfs_noversion
	I0814 00:35:29.303547   44271 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 00:35:29.303551   44271 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 00:35:29.303554   44271 command_runner.go:130] >   seccomp
	I0814 00:35:29.303558   44271 command_runner.go:130] > LDFlags:          unknown
	I0814 00:35:29.303563   44271 command_runner.go:130] > SeccompEnabled:   true
	I0814 00:35:29.303567   44271 command_runner.go:130] > AppArmorEnabled:  false
	I0814 00:35:29.305635   44271 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 00:35:29.307010   44271 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:35:29.309603   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:29.309914   44271 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:35:29.309941   44271 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:35:29.310166   44271 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 00:35:29.314013   44271 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0814 00:35:29.314200   44271 kubeadm.go:883] updating cluster {Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:35:29.314323   44271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:35:29.314378   44271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:35:29.350689   44271 command_runner.go:130] > {
	I0814 00:35:29.350709   44271 command_runner.go:130] >   "images": [
	I0814 00:35:29.350714   44271 command_runner.go:130] >     {
	I0814 00:35:29.350722   44271 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 00:35:29.350727   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350737   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 00:35:29.350741   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350745   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350753   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 00:35:29.350760   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 00:35:29.350764   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350774   44271 command_runner.go:130] >       "size": "87165492",
	I0814 00:35:29.350778   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350782   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350787   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350792   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350795   44271 command_runner.go:130] >     },
	I0814 00:35:29.350799   44271 command_runner.go:130] >     {
	I0814 00:35:29.350808   44271 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 00:35:29.350824   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350832   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 00:35:29.350837   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350846   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350858   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 00:35:29.350873   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 00:35:29.350879   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350883   44271 command_runner.go:130] >       "size": "1363676",
	I0814 00:35:29.350890   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350898   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350905   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350909   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350915   44271 command_runner.go:130] >     },
	I0814 00:35:29.350918   44271 command_runner.go:130] >     {
	I0814 00:35:29.350924   44271 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 00:35:29.350930   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.350936   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 00:35:29.350940   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350944   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.350954   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 00:35:29.350961   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 00:35:29.350967   44271 command_runner.go:130] >       ],
	I0814 00:35:29.350971   44271 command_runner.go:130] >       "size": "31470524",
	I0814 00:35:29.350975   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.350978   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.350982   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.350987   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.350990   44271 command_runner.go:130] >     },
	I0814 00:35:29.351004   44271 command_runner.go:130] >     {
	I0814 00:35:29.351013   44271 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 00:35:29.351017   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351022   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 00:35:29.351028   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351033   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351040   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 00:35:29.351055   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 00:35:29.351061   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351065   44271 command_runner.go:130] >       "size": "61245718",
	I0814 00:35:29.351071   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.351076   44271 command_runner.go:130] >       "username": "nonroot",
	I0814 00:35:29.351082   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351086   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351091   44271 command_runner.go:130] >     },
	I0814 00:35:29.351095   44271 command_runner.go:130] >     {
	I0814 00:35:29.351101   44271 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 00:35:29.351107   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351112   44271 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 00:35:29.351117   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351121   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351128   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 00:35:29.351137   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 00:35:29.351140   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351144   44271 command_runner.go:130] >       "size": "149009664",
	I0814 00:35:29.351148   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351152   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351155   44271 command_runner.go:130] >       },
	I0814 00:35:29.351160   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351164   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351167   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351171   44271 command_runner.go:130] >     },
	I0814 00:35:29.351174   44271 command_runner.go:130] >     {
	I0814 00:35:29.351180   44271 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 00:35:29.351186   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351191   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 00:35:29.351200   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351207   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351214   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 00:35:29.351222   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 00:35:29.351226   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351230   44271 command_runner.go:130] >       "size": "95233506",
	I0814 00:35:29.351236   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351240   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351245   44271 command_runner.go:130] >       },
	I0814 00:35:29.351251   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351260   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351266   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351274   44271 command_runner.go:130] >     },
	I0814 00:35:29.351280   44271 command_runner.go:130] >     {
	I0814 00:35:29.351290   44271 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 00:35:29.351296   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351302   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 00:35:29.351307   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351312   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351319   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 00:35:29.351339   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 00:35:29.351344   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351349   44271 command_runner.go:130] >       "size": "89437512",
	I0814 00:35:29.351352   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351356   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351359   44271 command_runner.go:130] >       },
	I0814 00:35:29.351363   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351367   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351371   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351375   44271 command_runner.go:130] >     },
	I0814 00:35:29.351378   44271 command_runner.go:130] >     {
	I0814 00:35:29.351384   44271 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 00:35:29.351390   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351394   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 00:35:29.351398   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351402   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351457   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 00:35:29.351470   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 00:35:29.351474   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351478   44271 command_runner.go:130] >       "size": "92728217",
	I0814 00:35:29.351482   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.351486   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351490   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351493   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351496   44271 command_runner.go:130] >     },
	I0814 00:35:29.351499   44271 command_runner.go:130] >     {
	I0814 00:35:29.351514   44271 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 00:35:29.351519   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351523   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 00:35:29.351526   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351533   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351542   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 00:35:29.351550   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 00:35:29.351555   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351560   44271 command_runner.go:130] >       "size": "68420936",
	I0814 00:35:29.351563   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351567   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.351571   44271 command_runner.go:130] >       },
	I0814 00:35:29.351575   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351580   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351585   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.351588   44271 command_runner.go:130] >     },
	I0814 00:35:29.351592   44271 command_runner.go:130] >     {
	I0814 00:35:29.351598   44271 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 00:35:29.351604   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.351608   44271 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 00:35:29.351614   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351618   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.351627   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 00:35:29.351634   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 00:35:29.351639   44271 command_runner.go:130] >       ],
	I0814 00:35:29.351643   44271 command_runner.go:130] >       "size": "742080",
	I0814 00:35:29.351652   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.351658   44271 command_runner.go:130] >         "value": "65535"
	I0814 00:35:29.351661   44271 command_runner.go:130] >       },
	I0814 00:35:29.351665   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.351669   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.351673   44271 command_runner.go:130] >       "pinned": true
	I0814 00:35:29.351678   44271 command_runner.go:130] >     }
	I0814 00:35:29.351681   44271 command_runner.go:130] >   ]
	I0814 00:35:29.351685   44271 command_runner.go:130] > }
	I0814 00:35:29.352264   44271 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:35:29.352278   44271 crio.go:433] Images already preloaded, skipping extraction
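
The preload check above works by parsing the JSON that `sudo crictl images --output json` prints and comparing the tags it finds against the expected image set. A minimal Go sketch of that idea follows; the struct fields mirror only what is visible in the log output, the sample payload is shortened to a single entry, and the code is illustrative rather than minikube's actual crio.go logic.

// Minimal sketch (not minikube's code) of parsing `crictl images --output json`
// and checking whether an expected tag is present.
package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors only the fields visible in the log output above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Shortened sample of the payload captured above (one image only).
	raw := []byte(`{"images":[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"size":"742080","pinned":true}]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}

	// Index the tags so expected images can be looked up directly.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	fmt.Printf("registry.k8s.io/pause:3.10 preloaded: %v\n", have["registry.k8s.io/pause:3.10"])
}
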
	I0814 00:35:29.352322   44271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:35:29.382309   44271 command_runner.go:130] > {
	I0814 00:35:29.382330   44271 command_runner.go:130] >   "images": [
	I0814 00:35:29.382334   44271 command_runner.go:130] >     {
	I0814 00:35:29.382344   44271 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 00:35:29.382350   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382356   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 00:35:29.382359   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382363   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382372   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 00:35:29.382383   44271 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 00:35:29.382388   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382395   44271 command_runner.go:130] >       "size": "87165492",
	I0814 00:35:29.382400   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382407   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382423   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382431   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382435   44271 command_runner.go:130] >     },
	I0814 00:35:29.382442   44271 command_runner.go:130] >     {
	I0814 00:35:29.382448   44271 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 00:35:29.382452   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382458   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 00:35:29.382463   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382467   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382498   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 00:35:29.382514   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 00:35:29.382520   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382526   44271 command_runner.go:130] >       "size": "1363676",
	I0814 00:35:29.382533   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382547   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382556   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382563   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382571   44271 command_runner.go:130] >     },
	I0814 00:35:29.382576   44271 command_runner.go:130] >     {
	I0814 00:35:29.382585   44271 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 00:35:29.382595   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382605   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 00:35:29.382611   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382619   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382631   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 00:35:29.382647   44271 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 00:35:29.382655   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382663   44271 command_runner.go:130] >       "size": "31470524",
	I0814 00:35:29.382671   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382678   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382685   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382690   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382698   44271 command_runner.go:130] >     },
	I0814 00:35:29.382704   44271 command_runner.go:130] >     {
	I0814 00:35:29.382717   44271 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 00:35:29.382723   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382734   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 00:35:29.382743   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382750   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382765   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 00:35:29.382790   44271 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 00:35:29.382798   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382805   44271 command_runner.go:130] >       "size": "61245718",
	I0814 00:35:29.382812   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.382822   44271 command_runner.go:130] >       "username": "nonroot",
	I0814 00:35:29.382835   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382844   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382850   44271 command_runner.go:130] >     },
	I0814 00:35:29.382858   44271 command_runner.go:130] >     {
	I0814 00:35:29.382868   44271 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 00:35:29.382876   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.382881   44271 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 00:35:29.382890   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382897   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.382911   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 00:35:29.382924   44271 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 00:35:29.382932   44271 command_runner.go:130] >       ],
	I0814 00:35:29.382939   44271 command_runner.go:130] >       "size": "149009664",
	I0814 00:35:29.382947   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.382954   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.382962   44271 command_runner.go:130] >       },
	I0814 00:35:29.382970   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.382978   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.382983   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.382989   44271 command_runner.go:130] >     },
	I0814 00:35:29.382995   44271 command_runner.go:130] >     {
	I0814 00:35:29.383007   44271 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 00:35:29.383014   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383025   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 00:35:29.383033   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383040   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383054   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 00:35:29.383069   44271 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 00:35:29.383077   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383082   44271 command_runner.go:130] >       "size": "95233506",
	I0814 00:35:29.383088   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383095   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383103   44271 command_runner.go:130] >       },
	I0814 00:35:29.383110   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383119   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383125   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383139   44271 command_runner.go:130] >     },
	I0814 00:35:29.383147   44271 command_runner.go:130] >     {
	I0814 00:35:29.383157   44271 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 00:35:29.383165   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383171   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 00:35:29.383178   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383185   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383201   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 00:35:29.383215   44271 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 00:35:29.383224   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383231   44271 command_runner.go:130] >       "size": "89437512",
	I0814 00:35:29.383240   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383246   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383255   44271 command_runner.go:130] >       },
	I0814 00:35:29.383261   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383269   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383274   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383281   44271 command_runner.go:130] >     },
	I0814 00:35:29.383286   44271 command_runner.go:130] >     {
	I0814 00:35:29.383298   44271 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 00:35:29.383308   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383317   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 00:35:29.383326   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383332   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383363   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 00:35:29.383373   44271 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 00:35:29.383378   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383385   44271 command_runner.go:130] >       "size": "92728217",
	I0814 00:35:29.383395   44271 command_runner.go:130] >       "uid": null,
	I0814 00:35:29.383401   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383411   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383418   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383426   44271 command_runner.go:130] >     },
	I0814 00:35:29.383431   44271 command_runner.go:130] >     {
	I0814 00:35:29.383444   44271 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 00:35:29.383452   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383560   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 00:35:29.383708   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383724   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383738   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 00:35:29.383755   44271 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 00:35:29.383762   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383769   44271 command_runner.go:130] >       "size": "68420936",
	I0814 00:35:29.383776   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.383782   44271 command_runner.go:130] >         "value": "0"
	I0814 00:35:29.383793   44271 command_runner.go:130] >       },
	I0814 00:35:29.383800   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.383807   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.383813   44271 command_runner.go:130] >       "pinned": false
	I0814 00:35:29.383819   44271 command_runner.go:130] >     },
	I0814 00:35:29.383824   44271 command_runner.go:130] >     {
	I0814 00:35:29.383839   44271 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 00:35:29.383846   44271 command_runner.go:130] >       "repoTags": [
	I0814 00:35:29.383852   44271 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 00:35:29.383914   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383942   44271 command_runner.go:130] >       "repoDigests": [
	I0814 00:35:29.383961   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 00:35:29.383983   44271 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 00:35:29.383989   44271 command_runner.go:130] >       ],
	I0814 00:35:29.383995   44271 command_runner.go:130] >       "size": "742080",
	I0814 00:35:29.384000   44271 command_runner.go:130] >       "uid": {
	I0814 00:35:29.384005   44271 command_runner.go:130] >         "value": "65535"
	I0814 00:35:29.384010   44271 command_runner.go:130] >       },
	I0814 00:35:29.384021   44271 command_runner.go:130] >       "username": "",
	I0814 00:35:29.384027   44271 command_runner.go:130] >       "spec": null,
	I0814 00:35:29.384034   44271 command_runner.go:130] >       "pinned": true
	I0814 00:35:29.384039   44271 command_runner.go:130] >     }
	I0814 00:35:29.384044   44271 command_runner.go:130] >   ]
	I0814 00:35:29.384049   44271 command_runner.go:130] > }
	I0814 00:35:29.384260   44271 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:35:29.384268   44271 cache_images.go:84] Images are preloaded, skipping loading
	I0814 00:35:29.384279   44271 kubeadm.go:934] updating node { 192.168.39.201 8443 v1.31.0 crio true true} ...
	I0814 00:35:29.384435   44271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-745925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
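
The kubelet drop-in logged above is a systemd unit whose ExecStart flags are filled in from the node's name and IP. A hypothetical Go sketch of rendering such a drop-in with text/template follows; the template text and field names are assumptions for illustration, not minikube's actual template.

// Hypothetical sketch of rendering a kubelet systemd drop-in like the one
// logged above; illustrative only, not minikube's real template.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the cluster config dumped earlier in this log.
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "multinode-745925", "192.168.39.201"}

	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
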
	I0814 00:35:29.384508   44271 ssh_runner.go:195] Run: crio config
	I0814 00:35:29.426507   44271 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0814 00:35:29.426548   44271 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0814 00:35:29.426560   44271 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0814 00:35:29.426565   44271 command_runner.go:130] > #
	I0814 00:35:29.426577   44271 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0814 00:35:29.426587   44271 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0814 00:35:29.426596   44271 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0814 00:35:29.426608   44271 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0814 00:35:29.426616   44271 command_runner.go:130] > # reload'.
	I0814 00:35:29.426625   44271 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0814 00:35:29.426638   44271 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0814 00:35:29.426648   44271 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0814 00:35:29.426659   44271 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0814 00:35:29.426665   44271 command_runner.go:130] > [crio]
	I0814 00:35:29.426677   44271 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0814 00:35:29.426685   44271 command_runner.go:130] > # containers images, in this directory.
	I0814 00:35:29.426696   44271 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0814 00:35:29.426716   44271 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0814 00:35:29.426726   44271 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0814 00:35:29.426736   44271 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0814 00:35:29.426746   44271 command_runner.go:130] > # imagestore = ""
	I0814 00:35:29.426756   44271 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0814 00:35:29.426768   44271 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0814 00:35:29.426775   44271 command_runner.go:130] > storage_driver = "overlay"
	I0814 00:35:29.426787   44271 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0814 00:35:29.426796   44271 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0814 00:35:29.426802   44271 command_runner.go:130] > storage_option = [
	I0814 00:35:29.426812   44271 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0814 00:35:29.426818   44271 command_runner.go:130] > ]
	I0814 00:35:29.426830   44271 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0814 00:35:29.426843   44271 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0814 00:35:29.426850   44271 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0814 00:35:29.426858   44271 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0814 00:35:29.426867   44271 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0814 00:35:29.426874   44271 command_runner.go:130] > # always happen on a node reboot
	I0814 00:35:29.426898   44271 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0814 00:35:29.426930   44271 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0814 00:35:29.426944   44271 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0814 00:35:29.426955   44271 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0814 00:35:29.426968   44271 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0814 00:35:29.426983   44271 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0814 00:35:29.426997   44271 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0814 00:35:29.427007   44271 command_runner.go:130] > # internal_wipe = true
	I0814 00:35:29.427018   44271 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0814 00:35:29.427027   44271 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0814 00:35:29.427037   44271 command_runner.go:130] > # internal_repair = false
	I0814 00:35:29.427048   44271 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0814 00:35:29.427061   44271 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0814 00:35:29.427072   44271 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0814 00:35:29.427081   44271 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0814 00:35:29.427095   44271 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0814 00:35:29.427103   44271 command_runner.go:130] > [crio.api]
	I0814 00:35:29.427111   44271 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0814 00:35:29.427123   44271 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0814 00:35:29.427136   44271 command_runner.go:130] > # IP address on which the stream server will listen.
	I0814 00:35:29.427146   44271 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0814 00:35:29.427159   44271 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0814 00:35:29.427169   44271 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0814 00:35:29.427176   44271 command_runner.go:130] > # stream_port = "0"
	I0814 00:35:29.427185   44271 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0814 00:35:29.427194   44271 command_runner.go:130] > # stream_enable_tls = false
	I0814 00:35:29.427207   44271 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0814 00:35:29.427214   44271 command_runner.go:130] > # stream_idle_timeout = ""
	I0814 00:35:29.427227   44271 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0814 00:35:29.427239   44271 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0814 00:35:29.427247   44271 command_runner.go:130] > # minutes.
	I0814 00:35:29.427256   44271 command_runner.go:130] > # stream_tls_cert = ""
	I0814 00:35:29.427266   44271 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0814 00:35:29.427282   44271 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0814 00:35:29.427291   44271 command_runner.go:130] > # stream_tls_key = ""
	I0814 00:35:29.427300   44271 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0814 00:35:29.427310   44271 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0814 00:35:29.427337   44271 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0814 00:35:29.427346   44271 command_runner.go:130] > # stream_tls_ca = ""
	I0814 00:35:29.427358   44271 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 00:35:29.427367   44271 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0814 00:35:29.427385   44271 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 00:35:29.427397   44271 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0814 00:35:29.427410   44271 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0814 00:35:29.427424   44271 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0814 00:35:29.427434   44271 command_runner.go:130] > [crio.runtime]
	I0814 00:35:29.427444   44271 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0814 00:35:29.427456   44271 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0814 00:35:29.427464   44271 command_runner.go:130] > # "nofile=1024:2048"
	I0814 00:35:29.427473   44271 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0814 00:35:29.427482   44271 command_runner.go:130] > # default_ulimits = [
	I0814 00:35:29.427487   44271 command_runner.go:130] > # ]
	I0814 00:35:29.427498   44271 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0814 00:35:29.427507   44271 command_runner.go:130] > # no_pivot = false
	I0814 00:35:29.427516   44271 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0814 00:35:29.427529   44271 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0814 00:35:29.427539   44271 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0814 00:35:29.427554   44271 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0814 00:35:29.427565   44271 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0814 00:35:29.427576   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 00:35:29.427587   44271 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0814 00:35:29.427596   44271 command_runner.go:130] > # Cgroup setting for conmon
	I0814 00:35:29.427611   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0814 00:35:29.427618   44271 command_runner.go:130] > conmon_cgroup = "pod"
	I0814 00:35:29.427629   44271 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0814 00:35:29.427640   44271 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0814 00:35:29.427654   44271 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 00:35:29.427663   44271 command_runner.go:130] > conmon_env = [
	I0814 00:35:29.427677   44271 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 00:35:29.427687   44271 command_runner.go:130] > ]
	I0814 00:35:29.427695   44271 command_runner.go:130] > # Additional environment variables to set for all the
	I0814 00:35:29.427706   44271 command_runner.go:130] > # containers. These are overridden if set in the
	I0814 00:35:29.427717   44271 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0814 00:35:29.427727   44271 command_runner.go:130] > # default_env = [
	I0814 00:35:29.427732   44271 command_runner.go:130] > # ]
	I0814 00:35:29.427745   44271 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0814 00:35:29.427760   44271 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0814 00:35:29.427769   44271 command_runner.go:130] > # selinux = false
	I0814 00:35:29.427779   44271 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0814 00:35:29.427791   44271 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0814 00:35:29.427803   44271 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0814 00:35:29.427809   44271 command_runner.go:130] > # seccomp_profile = ""
	I0814 00:35:29.427821   44271 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0814 00:35:29.427833   44271 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0814 00:35:29.427843   44271 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0814 00:35:29.427853   44271 command_runner.go:130] > # which might increase security.
	I0814 00:35:29.427860   44271 command_runner.go:130] > # This option is currently deprecated,
	I0814 00:35:29.427873   44271 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0814 00:35:29.427883   44271 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0814 00:35:29.427893   44271 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0814 00:35:29.427905   44271 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0814 00:35:29.427916   44271 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0814 00:35:29.427928   44271 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0814 00:35:29.427939   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.427950   44271 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0814 00:35:29.427960   44271 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0814 00:35:29.427970   44271 command_runner.go:130] > # the cgroup blockio controller.
	I0814 00:35:29.427976   44271 command_runner.go:130] > # blockio_config_file = ""
	I0814 00:35:29.427990   44271 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0814 00:35:29.428001   44271 command_runner.go:130] > # blockio parameters.
	I0814 00:35:29.428006   44271 command_runner.go:130] > # blockio_reload = false
	I0814 00:35:29.428017   44271 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0814 00:35:29.428025   44271 command_runner.go:130] > # irqbalance daemon.
	I0814 00:35:29.428033   44271 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0814 00:35:29.428043   44271 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0814 00:35:29.428056   44271 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0814 00:35:29.428070   44271 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0814 00:35:29.428082   44271 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0814 00:35:29.428095   44271 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0814 00:35:29.428106   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.428118   44271 command_runner.go:130] > # rdt_config_file = ""
	I0814 00:35:29.428127   44271 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0814 00:35:29.428136   44271 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0814 00:35:29.428175   44271 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0814 00:35:29.428185   44271 command_runner.go:130] > # separate_pull_cgroup = ""
	I0814 00:35:29.428198   44271 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0814 00:35:29.428211   44271 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0814 00:35:29.428219   44271 command_runner.go:130] > # will be added.
	I0814 00:35:29.428226   44271 command_runner.go:130] > # default_capabilities = [
	I0814 00:35:29.428234   44271 command_runner.go:130] > # 	"CHOWN",
	I0814 00:35:29.428241   44271 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0814 00:35:29.428247   44271 command_runner.go:130] > # 	"FSETID",
	I0814 00:35:29.428254   44271 command_runner.go:130] > # 	"FOWNER",
	I0814 00:35:29.428263   44271 command_runner.go:130] > # 	"SETGID",
	I0814 00:35:29.428269   44271 command_runner.go:130] > # 	"SETUID",
	I0814 00:35:29.428278   44271 command_runner.go:130] > # 	"SETPCAP",
	I0814 00:35:29.428285   44271 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0814 00:35:29.428295   44271 command_runner.go:130] > # 	"KILL",
	I0814 00:35:29.428300   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428314   44271 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0814 00:35:29.428327   44271 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0814 00:35:29.428333   44271 command_runner.go:130] > # add_inheritable_capabilities = false
	I0814 00:35:29.428343   44271 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0814 00:35:29.428354   44271 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 00:35:29.428363   44271 command_runner.go:130] > default_sysctls = [
	I0814 00:35:29.428380   44271 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0814 00:35:29.428397   44271 command_runner.go:130] > ]
	I0814 00:35:29.428408   44271 command_runner.go:130] > # List of devices on the host that a
	I0814 00:35:29.428417   44271 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0814 00:35:29.428427   44271 command_runner.go:130] > # allowed_devices = [
	I0814 00:35:29.428433   44271 command_runner.go:130] > # 	"/dev/fuse",
	I0814 00:35:29.428442   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428450   44271 command_runner.go:130] > # List of additional devices. specified as
	I0814 00:35:29.428464   44271 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0814 00:35:29.428477   44271 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0814 00:35:29.428488   44271 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 00:35:29.428495   44271 command_runner.go:130] > # additional_devices = [
	I0814 00:35:29.428503   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428513   44271 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0814 00:35:29.428523   44271 command_runner.go:130] > # cdi_spec_dirs = [
	I0814 00:35:29.428529   44271 command_runner.go:130] > # 	"/etc/cdi",
	I0814 00:35:29.428538   44271 command_runner.go:130] > # 	"/var/run/cdi",
	I0814 00:35:29.428543   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428555   44271 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0814 00:35:29.428568   44271 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0814 00:35:29.428576   44271 command_runner.go:130] > # Defaults to false.
	I0814 00:35:29.428584   44271 command_runner.go:130] > # device_ownership_from_security_context = false
	I0814 00:35:29.428596   44271 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0814 00:35:29.428608   44271 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0814 00:35:29.428617   44271 command_runner.go:130] > # hooks_dir = [
	I0814 00:35:29.428625   44271 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0814 00:35:29.428633   44271 command_runner.go:130] > # ]
	I0814 00:35:29.428642   44271 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0814 00:35:29.428655   44271 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0814 00:35:29.428666   44271 command_runner.go:130] > # its default mounts from the following two files:
	I0814 00:35:29.428672   44271 command_runner.go:130] > #
	I0814 00:35:29.428686   44271 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0814 00:35:29.428699   44271 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0814 00:35:29.428710   44271 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0814 00:35:29.428719   44271 command_runner.go:130] > #
	I0814 00:35:29.428728   44271 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0814 00:35:29.428741   44271 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0814 00:35:29.428759   44271 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0814 00:35:29.428772   44271 command_runner.go:130] > #      only add mounts it finds in this file.
	I0814 00:35:29.428777   44271 command_runner.go:130] > #
	I0814 00:35:29.428786   44271 command_runner.go:130] > # default_mounts_file = ""
	I0814 00:35:29.428795   44271 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0814 00:35:29.428809   44271 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0814 00:35:29.428819   44271 command_runner.go:130] > pids_limit = 1024
	I0814 00:35:29.428828   44271 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0814 00:35:29.428840   44271 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0814 00:35:29.428853   44271 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0814 00:35:29.428867   44271 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0814 00:35:29.428877   44271 command_runner.go:130] > # log_size_max = -1
	I0814 00:35:29.428890   44271 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0814 00:35:29.428899   44271 command_runner.go:130] > # log_to_journald = false
	I0814 00:35:29.428927   44271 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0814 00:35:29.428944   44271 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0814 00:35:29.428955   44271 command_runner.go:130] > # Path to directory for container attach sockets.
	I0814 00:35:29.428964   44271 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0814 00:35:29.428976   44271 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0814 00:35:29.428983   44271 command_runner.go:130] > # bind_mount_prefix = ""
	I0814 00:35:29.428995   44271 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0814 00:35:29.429007   44271 command_runner.go:130] > # read_only = false
	I0814 00:35:29.429017   44271 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0814 00:35:29.429029   44271 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0814 00:35:29.429040   44271 command_runner.go:130] > # live configuration reload.
	I0814 00:35:29.429048   44271 command_runner.go:130] > # log_level = "info"
	I0814 00:35:29.429057   44271 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0814 00:35:29.429067   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.429078   44271 command_runner.go:130] > # log_filter = ""
	I0814 00:35:29.429087   44271 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0814 00:35:29.429100   44271 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0814 00:35:29.429110   44271 command_runner.go:130] > # separated by comma.
	I0814 00:35:29.429121   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429130   44271 command_runner.go:130] > # uid_mappings = ""
	I0814 00:35:29.429140   44271 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0814 00:35:29.429150   44271 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0814 00:35:29.429164   44271 command_runner.go:130] > # separated by comma.
	I0814 00:35:29.429178   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429189   44271 command_runner.go:130] > # gid_mappings = ""
	I0814 00:35:29.429198   44271 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0814 00:35:29.429209   44271 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 00:35:29.429222   44271 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 00:35:29.429239   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429264   44271 command_runner.go:130] > # minimum_mappable_uid = -1
	I0814 00:35:29.429279   44271 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0814 00:35:29.429291   44271 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 00:35:29.429302   44271 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 00:35:29.429313   44271 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 00:35:29.429324   44271 command_runner.go:130] > # minimum_mappable_gid = -1
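	The uid_mappings/gid_mappings options above take ranges of the form "containerID:hostID:size", separated by commas. The following minimal Go sketch parses that format; it is illustrative only (not CRI-O's code) and the sample range is hypothetical.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	type idMapping struct {
		ContainerID, HostID, Size int
	}

	// parseIDMappings splits "containerID:hostID:size" ranges separated by commas.
	func parseIDMappings(s string) ([]idMapping, error) {
		var maps []idMapping
		if strings.TrimSpace(s) == "" {
			return maps, nil
		}
		for _, r := range strings.Split(s, ",") {
			parts := strings.Split(strings.TrimSpace(r), ":")
			if len(parts) != 3 {
				return nil, fmt.Errorf("invalid range %q", r)
			}
			var nums [3]int
			for i, p := range parts {
				n, err := strconv.Atoi(p)
				if err != nil {
					return nil, err
				}
				nums[i] = n
			}
			maps = append(maps, idMapping{nums[0], nums[1], nums[2]})
		}
		return maps, nil
	}

	func main() {
		m, err := parseIDMappings("0:100000:65536") // hypothetical mapping
		if err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", m) // [{ContainerID:0 HostID:100000 Size:65536}]
	}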
	I0814 00:35:29.429335   44271 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0814 00:35:29.429347   44271 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0814 00:35:29.429359   44271 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0814 00:35:29.429367   44271 command_runner.go:130] > # ctr_stop_timeout = 30
	I0814 00:35:29.429384   44271 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0814 00:35:29.429393   44271 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0814 00:35:29.429402   44271 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0814 00:35:29.429408   44271 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0814 00:35:29.429417   44271 command_runner.go:130] > drop_infra_ctr = false
	I0814 00:35:29.429426   44271 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0814 00:35:29.429437   44271 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0814 00:35:29.429452   44271 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0814 00:35:29.429459   44271 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0814 00:35:29.429471   44271 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0814 00:35:29.429484   44271 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0814 00:35:29.429496   44271 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0814 00:35:29.429504   44271 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0814 00:35:29.429513   44271 command_runner.go:130] > # shared_cpuset = ""
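	infra_ctr_cpuset and shared_cpuset above take the Linux CPU list format (for example "0-1,4"). A minimal Go sketch of expanding such a list follows; it is purely illustrative and not taken from CRI-O.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parseCPUList expands a Linux CPU list such as "0-1,4" into individual CPU IDs.
	func parseCPUList(s string) ([]int, error) {
		var cpus []int
		if strings.TrimSpace(s) == "" {
			return cpus, nil
		}
		for _, part := range strings.Split(s, ",") {
			if lo, hi, ok := strings.Cut(part, "-"); ok {
				start, err := strconv.Atoi(strings.TrimSpace(lo))
				if err != nil {
					return nil, err
				}
				end, err := strconv.Atoi(strings.TrimSpace(hi))
				if err != nil {
					return nil, err
				}
				for c := start; c <= end; c++ {
					cpus = append(cpus, c)
				}
				continue
			}
			c, err := strconv.Atoi(strings.TrimSpace(part))
			if err != nil {
				return nil, err
			}
			cpus = append(cpus, c)
		}
		return cpus, nil
	}

	func main() {
		cpus, err := parseCPUList("0-1,4")
		if err != nil {
			panic(err)
		}
		fmt.Println(cpus) // [0 1 4]
	}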
	I0814 00:35:29.429524   44271 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0814 00:35:29.429535   44271 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0814 00:35:29.429545   44271 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0814 00:35:29.429557   44271 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0814 00:35:29.429567   44271 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0814 00:35:29.429585   44271 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0814 00:35:29.429598   44271 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0814 00:35:29.429604   44271 command_runner.go:130] > # enable_criu_support = false
	I0814 00:35:29.429613   44271 command_runner.go:130] > # Enable/disable the generation of the container,
	I0814 00:35:29.429621   44271 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0814 00:35:29.429632   44271 command_runner.go:130] > # enable_pod_events = false
	I0814 00:35:29.429640   44271 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 00:35:29.429653   44271 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 00:35:29.429661   44271 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0814 00:35:29.429670   44271 command_runner.go:130] > # default_runtime = "runc"
	I0814 00:35:29.429680   44271 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0814 00:35:29.429694   44271 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0814 00:35:29.429712   44271 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0814 00:35:29.429723   44271 command_runner.go:130] > # creation as a file is not desired either.
	I0814 00:35:29.429739   44271 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0814 00:35:29.429750   44271 command_runner.go:130] > # the hostname is being managed dynamically.
	I0814 00:35:29.429757   44271 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0814 00:35:29.429766   44271 command_runner.go:130] > # ]
	I0814 00:35:29.429775   44271 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0814 00:35:29.429790   44271 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0814 00:35:29.429802   44271 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0814 00:35:29.429812   44271 command_runner.go:130] > # Each entry in the table should follow the format:
	I0814 00:35:29.429821   44271 command_runner.go:130] > #
	I0814 00:35:29.429828   44271 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0814 00:35:29.429839   44271 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0814 00:35:29.429885   44271 command_runner.go:130] > # runtime_type = "oci"
	I0814 00:35:29.429904   44271 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0814 00:35:29.429913   44271 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0814 00:35:29.429922   44271 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0814 00:35:29.429933   44271 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0814 00:35:29.429939   44271 command_runner.go:130] > # monitor_env = []
	I0814 00:35:29.429950   44271 command_runner.go:130] > # privileged_without_host_devices = false
	I0814 00:35:29.429960   44271 command_runner.go:130] > # allowed_annotations = []
	I0814 00:35:29.429969   44271 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0814 00:35:29.429977   44271 command_runner.go:130] > # Where:
	I0814 00:35:29.429985   44271 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0814 00:35:29.429999   44271 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0814 00:35:29.430013   44271 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0814 00:35:29.430026   44271 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0814 00:35:29.430033   44271 command_runner.go:130] > #   in $PATH.
	I0814 00:35:29.430068   44271 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0814 00:35:29.430079   44271 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0814 00:35:29.430089   44271 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0814 00:35:29.430100   44271 command_runner.go:130] > #   state.
	I0814 00:35:29.430115   44271 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0814 00:35:29.430128   44271 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0814 00:35:29.430140   44271 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0814 00:35:29.430152   44271 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0814 00:35:29.430164   44271 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0814 00:35:29.430177   44271 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0814 00:35:29.430188   44271 command_runner.go:130] > #   The currently recognized values are:
	I0814 00:35:29.430202   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0814 00:35:29.430216   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0814 00:35:29.430229   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0814 00:35:29.430243   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0814 00:35:29.430257   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0814 00:35:29.430271   44271 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0814 00:35:29.430284   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0814 00:35:29.430294   44271 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0814 00:35:29.430307   44271 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0814 00:35:29.430318   44271 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0814 00:35:29.430328   44271 command_runner.go:130] > #   deprecated option "conmon".
	I0814 00:35:29.430339   44271 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0814 00:35:29.430349   44271 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0814 00:35:29.430361   44271 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0814 00:35:29.430371   44271 command_runner.go:130] > #   should be moved to the container's cgroup
	I0814 00:35:29.430386   44271 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0814 00:35:29.430396   44271 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0814 00:35:29.430409   44271 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0814 00:35:29.430421   44271 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0814 00:35:29.430427   44271 command_runner.go:130] > #
	I0814 00:35:29.430434   44271 command_runner.go:130] > # Using the seccomp notifier feature:
	I0814 00:35:29.430445   44271 command_runner.go:130] > #
	I0814 00:35:29.430458   44271 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0814 00:35:29.430469   44271 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0814 00:35:29.430477   44271 command_runner.go:130] > #
	I0814 00:35:29.430487   44271 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0814 00:35:29.430499   44271 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0814 00:35:29.430508   44271 command_runner.go:130] > #
	I0814 00:35:29.430518   44271 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0814 00:35:29.430527   44271 command_runner.go:130] > # feature.
	I0814 00:35:29.430533   44271 command_runner.go:130] > #
	I0814 00:35:29.430544   44271 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0814 00:35:29.430556   44271 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0814 00:35:29.430569   44271 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0814 00:35:29.430582   44271 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0814 00:35:29.430594   44271 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0814 00:35:29.430602   44271 command_runner.go:130] > #
	I0814 00:35:29.430612   44271 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0814 00:35:29.430624   44271 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0814 00:35:29.430632   44271 command_runner.go:130] > #
	I0814 00:35:29.430645   44271 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0814 00:35:29.430658   44271 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0814 00:35:29.430666   44271 command_runner.go:130] > #
	I0814 00:35:29.430679   44271 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0814 00:35:29.430691   44271 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0814 00:35:29.430700   44271 command_runner.go:130] > # limitation.
	I0814 00:35:29.430710   44271 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0814 00:35:29.430720   44271 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0814 00:35:29.430730   44271 command_runner.go:130] > runtime_type = "oci"
	I0814 00:35:29.430740   44271 command_runner.go:130] > runtime_root = "/run/runc"
	I0814 00:35:29.430750   44271 command_runner.go:130] > runtime_config_path = ""
	I0814 00:35:29.430762   44271 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0814 00:35:29.430771   44271 command_runner.go:130] > monitor_cgroup = "pod"
	I0814 00:35:29.430780   44271 command_runner.go:130] > monitor_exec_cgroup = ""
	I0814 00:35:29.430786   44271 command_runner.go:130] > monitor_env = [
	I0814 00:35:29.430799   44271 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 00:35:29.430806   44271 command_runner.go:130] > ]
	I0814 00:35:29.430815   44271 command_runner.go:130] > privileged_without_host_devices = false
	I0814 00:35:29.430827   44271 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0814 00:35:29.430838   44271 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0814 00:35:29.430850   44271 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0814 00:35:29.430864   44271 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0814 00:35:29.430879   44271 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0814 00:35:29.430895   44271 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0814 00:35:29.430913   44271 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0814 00:35:29.430928   44271 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0814 00:35:29.430941   44271 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0814 00:35:29.430956   44271 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0814 00:35:29.430965   44271 command_runner.go:130] > # Example:
	I0814 00:35:29.430976   44271 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0814 00:35:29.430986   44271 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0814 00:35:29.430997   44271 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0814 00:35:29.431007   44271 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0814 00:35:29.431015   44271 command_runner.go:130] > # cpuset = 0
	I0814 00:35:29.431021   44271 command_runner.go:130] > # cpushares = "0-1"
	I0814 00:35:29.431029   44271 command_runner.go:130] > # Where:
	I0814 00:35:29.431039   44271 command_runner.go:130] > # The workload name is workload-type.
	I0814 00:35:29.431052   44271 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0814 00:35:29.431062   44271 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0814 00:35:29.431072   44271 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0814 00:35:29.431085   44271 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0814 00:35:29.431098   44271 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
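	As a rough illustration of the per-container override form shown just above, the sketch below reads a hypothetical annotation whose value is the small JSON object {"cpushares": "512"}; the container name and value are made up, and this is not CRI-O's own parsing code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Hypothetical pod annotations: the activation annotation is key-only,
		// the per-container override carries a small JSON value.
		annotations := map[string]string{
			"io.crio/workload":                    "",
			"io.crio.workload-type/my-container": `{"cpushares": "512"}`,
		}

		var overrides map[string]string
		if v, ok := annotations["io.crio.workload-type/my-container"]; ok && v != "" {
			if err := json.Unmarshal([]byte(v), &overrides); err != nil {
				panic(err)
			}
		}
		fmt.Println(overrides["cpushares"]) // "512"
	}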
	I0814 00:35:29.431109   44271 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0814 00:35:29.431122   44271 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0814 00:35:29.431131   44271 command_runner.go:130] > # Default value is set to true
	I0814 00:35:29.431141   44271 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0814 00:35:29.431153   44271 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0814 00:35:29.431163   44271 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0814 00:35:29.431173   44271 command_runner.go:130] > # Default value is set to 'false'
	I0814 00:35:29.431183   44271 command_runner.go:130] > # disable_hostport_mapping = false
	I0814 00:35:29.431196   44271 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0814 00:35:29.431205   44271 command_runner.go:130] > #
	I0814 00:35:29.431217   44271 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0814 00:35:29.431231   44271 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0814 00:35:29.431240   44271 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0814 00:35:29.431250   44271 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0814 00:35:29.431260   44271 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0814 00:35:29.431265   44271 command_runner.go:130] > [crio.image]
	I0814 00:35:29.431273   44271 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0814 00:35:29.431278   44271 command_runner.go:130] > # default_transport = "docker://"
	I0814 00:35:29.431287   44271 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0814 00:35:29.431296   44271 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0814 00:35:29.431302   44271 command_runner.go:130] > # global_auth_file = ""
	I0814 00:35:29.431309   44271 command_runner.go:130] > # The image used to instantiate infra containers.
	I0814 00:35:29.431315   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.431323   44271 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0814 00:35:29.431335   44271 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0814 00:35:29.431346   44271 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0814 00:35:29.431355   44271 command_runner.go:130] > # This option supports live configuration reload.
	I0814 00:35:29.431362   44271 command_runner.go:130] > # pause_image_auth_file = ""
	I0814 00:35:29.431378   44271 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0814 00:35:29.431389   44271 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0814 00:35:29.431401   44271 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0814 00:35:29.431412   44271 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0814 00:35:29.431422   44271 command_runner.go:130] > # pause_command = "/pause"
	I0814 00:35:29.431432   44271 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0814 00:35:29.431443   44271 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0814 00:35:29.431455   44271 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0814 00:35:29.431466   44271 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0814 00:35:29.431477   44271 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0814 00:35:29.431489   44271 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0814 00:35:29.431498   44271 command_runner.go:130] > # pinned_images = [
	I0814 00:35:29.431506   44271 command_runner.go:130] > # ]
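	The exact/glob/keyword matching rules described above for pinned_images can be sketched in a few lines of Go. This is an illustration of the stated rules only, not CRI-O's implementation, and the example image names are taken from this config (pause) for readability.

	package main

	import (
		"fmt"
		"strings"
	)

	// pinnedMatch applies the rules described above: exact match of the whole name,
	// a trailing * glob, or a keyword wrapped in * on both ends.
	func pinnedMatch(pattern, image string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(image, strings.Trim(pattern, "*")) // keyword
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*")) // glob
		default:
			return pattern == image // exact
		}
	}

	func main() {
		img := "registry.k8s.io/pause:3.10"
		fmt.Println(pinnedMatch("registry.k8s.io/pause:3.10", img)) // true (exact)
		fmt.Println(pinnedMatch("registry.k8s.io/*", img))          // true (glob)
		fmt.Println(pinnedMatch("*pause*", img))                    // true (keyword)
	}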
	I0814 00:35:29.431518   44271 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0814 00:35:29.431532   44271 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0814 00:35:29.431545   44271 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0814 00:35:29.431558   44271 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0814 00:35:29.431568   44271 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0814 00:35:29.431577   44271 command_runner.go:130] > # signature_policy = ""
	I0814 00:35:29.431589   44271 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0814 00:35:29.431602   44271 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0814 00:35:29.431613   44271 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0814 00:35:29.431631   44271 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0814 00:35:29.431641   44271 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0814 00:35:29.431650   44271 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0814 00:35:29.431660   44271 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0814 00:35:29.431674   44271 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0814 00:35:29.431683   44271 command_runner.go:130] > # changing them here.
	I0814 00:35:29.431693   44271 command_runner.go:130] > # insecure_registries = [
	I0814 00:35:29.431701   44271 command_runner.go:130] > # ]
	I0814 00:35:29.431712   44271 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0814 00:35:29.431723   44271 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0814 00:35:29.431731   44271 command_runner.go:130] > # image_volumes = "mkdir"
	I0814 00:35:29.431739   44271 command_runner.go:130] > # Temporary directory to use for storing big files
	I0814 00:35:29.431747   44271 command_runner.go:130] > # big_files_temporary_dir = ""
	I0814 00:35:29.431760   44271 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0814 00:35:29.431769   44271 command_runner.go:130] > # CNI plugins.
	I0814 00:35:29.431777   44271 command_runner.go:130] > [crio.network]
	I0814 00:35:29.431790   44271 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0814 00:35:29.431800   44271 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0814 00:35:29.431809   44271 command_runner.go:130] > # cni_default_network = ""
	I0814 00:35:29.431821   44271 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0814 00:35:29.431832   44271 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0814 00:35:29.431843   44271 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0814 00:35:29.431852   44271 command_runner.go:130] > # plugin_dirs = [
	I0814 00:35:29.431858   44271 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0814 00:35:29.431866   44271 command_runner.go:130] > # ]
	I0814 00:35:29.431876   44271 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0814 00:35:29.431885   44271 command_runner.go:130] > [crio.metrics]
	I0814 00:35:29.431892   44271 command_runner.go:130] > # Globally enable or disable metrics support.
	I0814 00:35:29.431901   44271 command_runner.go:130] > enable_metrics = true
	I0814 00:35:29.431911   44271 command_runner.go:130] > # Specify enabled metrics collectors.
	I0814 00:35:29.431921   44271 command_runner.go:130] > # Per default all metrics are enabled.
	I0814 00:35:29.431934   44271 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0814 00:35:29.431946   44271 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0814 00:35:29.431960   44271 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0814 00:35:29.431968   44271 command_runner.go:130] > # metrics_collectors = [
	I0814 00:35:29.431977   44271 command_runner.go:130] > # 	"operations",
	I0814 00:35:29.431987   44271 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0814 00:35:29.431997   44271 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0814 00:35:29.432008   44271 command_runner.go:130] > # 	"operations_errors",
	I0814 00:35:29.432016   44271 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0814 00:35:29.432025   44271 command_runner.go:130] > # 	"image_pulls_by_name",
	I0814 00:35:29.432032   44271 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0814 00:35:29.432040   44271 command_runner.go:130] > # 	"image_pulls_failures",
	I0814 00:35:29.432049   44271 command_runner.go:130] > # 	"image_pulls_successes",
	I0814 00:35:29.432058   44271 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0814 00:35:29.432067   44271 command_runner.go:130] > # 	"image_layer_reuse",
	I0814 00:35:29.432077   44271 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0814 00:35:29.432086   44271 command_runner.go:130] > # 	"containers_oom_total",
	I0814 00:35:29.432095   44271 command_runner.go:130] > # 	"containers_oom",
	I0814 00:35:29.432103   44271 command_runner.go:130] > # 	"processes_defunct",
	I0814 00:35:29.432110   44271 command_runner.go:130] > # 	"operations_total",
	I0814 00:35:29.432119   44271 command_runner.go:130] > # 	"operations_latency_seconds",
	I0814 00:35:29.432129   44271 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0814 00:35:29.432138   44271 command_runner.go:130] > # 	"operations_errors_total",
	I0814 00:35:29.432147   44271 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0814 00:35:29.432157   44271 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0814 00:35:29.432167   44271 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0814 00:35:29.432177   44271 command_runner.go:130] > # 	"image_pulls_success_total",
	I0814 00:35:29.432186   44271 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0814 00:35:29.432194   44271 command_runner.go:130] > # 	"containers_oom_count_total",
	I0814 00:35:29.432201   44271 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0814 00:35:29.432211   44271 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0814 00:35:29.432217   44271 command_runner.go:130] > # ]
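	A minimal Go sketch of the prefix equivalence described above, where "operations", "crio_operations" and "container_runtime_crio_operations" all name the same collector. Illustrative only; not CRI-O code.

	package main

	import (
		"fmt"
		"strings"
	)

	// canonicalCollector strips the optional prefixes so all three spellings compare equal.
	func canonicalCollector(name string) string {
		name = strings.TrimPrefix(name, "container_runtime_")
		name = strings.TrimPrefix(name, "crio_")
		return name
	}

	func main() {
		for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
			fmt.Println(canonicalCollector(n)) // all print "operations"
		}
	}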
	I0814 00:35:29.432225   44271 command_runner.go:130] > # The port on which the metrics server will listen.
	I0814 00:35:29.432234   44271 command_runner.go:130] > # metrics_port = 9090
	I0814 00:35:29.432242   44271 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0814 00:35:29.432251   44271 command_runner.go:130] > # metrics_socket = ""
	I0814 00:35:29.432261   44271 command_runner.go:130] > # The certificate for the secure metrics server.
	I0814 00:35:29.432274   44271 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0814 00:35:29.432288   44271 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0814 00:35:29.432301   44271 command_runner.go:130] > # certificate on any modification event.
	I0814 00:35:29.432311   44271 command_runner.go:130] > # metrics_cert = ""
	I0814 00:35:29.432319   44271 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0814 00:35:29.432329   44271 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0814 00:35:29.432339   44271 command_runner.go:130] > # metrics_key = ""
	I0814 00:35:29.432351   44271 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0814 00:35:29.432359   44271 command_runner.go:130] > [crio.tracing]
	I0814 00:35:29.432371   44271 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0814 00:35:29.432386   44271 command_runner.go:130] > # enable_tracing = false
	I0814 00:35:29.432393   44271 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0814 00:35:29.432403   44271 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0814 00:35:29.432415   44271 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0814 00:35:29.432425   44271 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0814 00:35:29.432434   44271 command_runner.go:130] > # CRI-O NRI configuration.
	I0814 00:35:29.432443   44271 command_runner.go:130] > [crio.nri]
	I0814 00:35:29.432452   44271 command_runner.go:130] > # Globally enable or disable NRI.
	I0814 00:35:29.432461   44271 command_runner.go:130] > # enable_nri = false
	I0814 00:35:29.432470   44271 command_runner.go:130] > # NRI socket to listen on.
	I0814 00:35:29.432479   44271 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0814 00:35:29.432489   44271 command_runner.go:130] > # NRI plugin directory to use.
	I0814 00:35:29.432498   44271 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0814 00:35:29.432508   44271 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0814 00:35:29.432519   44271 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0814 00:35:29.432530   44271 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0814 00:35:29.432539   44271 command_runner.go:130] > # nri_disable_connections = false
	I0814 00:35:29.432548   44271 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0814 00:35:29.432558   44271 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0814 00:35:29.432564   44271 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0814 00:35:29.432572   44271 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0814 00:35:29.432578   44271 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0814 00:35:29.432585   44271 command_runner.go:130] > [crio.stats]
	I0814 00:35:29.432591   44271 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0814 00:35:29.432599   44271 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0814 00:35:29.432606   44271 command_runner.go:130] > # stats_collection_period = 0
	I0814 00:35:29.432634   44271 command_runner.go:130] ! time="2024-08-14 00:35:29.387565487Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0814 00:35:29.432649   44271 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0814 00:35:29.432749   44271 cni.go:84] Creating CNI manager for ""
	I0814 00:35:29.432757   44271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 00:35:29.432765   44271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:35:29.432784   44271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.201 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-745925 NodeName:multinode-745925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
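	The controller manager is started with allocate-node-cidrs:true and PodSubnet 10.244.0.0/16 above, so each of the three nodes in this profile is carved a smaller per-node CIDR out of that range. Assuming the default /24 node mask for IPv4, a minimal Go sketch of that split (illustrative, not minikube or controller-manager code):

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		// Cluster pod CIDR from the kubeadm options above.
		cluster := netip.MustParsePrefix("10.244.0.0/16")
		addr := cluster.Addr()
		// Three nodes in this profile; each gets the next /24 out of the /16.
		for node := 0; node < 3; node++ {
			fmt.Println(netip.PrefixFrom(addr, 24)) // 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24
			for i := 0; i < 256; i++ {
				addr = addr.Next() // step past the 256 addresses of this /24
			}
		}
	}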
	I0814 00:35:29.432947   44271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-745925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:35:29.433006   44271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:35:29.443631   44271 command_runner.go:130] > kubeadm
	I0814 00:35:29.443662   44271 command_runner.go:130] > kubectl
	I0814 00:35:29.443668   44271 command_runner.go:130] > kubelet
	I0814 00:35:29.443695   44271 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:35:29.443747   44271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:35:29.452610   44271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 00:35:29.469032   44271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:35:29.485178   44271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0814 00:35:29.500456   44271 ssh_runner.go:195] Run: grep 192.168.39.201	control-plane.minikube.internal$ /etc/hosts
	I0814 00:35:29.503765   44271 command_runner.go:130] > 192.168.39.201	control-plane.minikube.internal
	I0814 00:35:29.503844   44271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:35:29.637304   44271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:35:29.650883   44271 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925 for IP: 192.168.39.201
	I0814 00:35:29.650907   44271 certs.go:194] generating shared ca certs ...
	I0814 00:35:29.650928   44271 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:35:29.651104   44271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:35:29.651145   44271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:35:29.651155   44271 certs.go:256] generating profile certs ...
	I0814 00:35:29.651225   44271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/client.key
	I0814 00:35:29.651278   44271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key.a77e74ae
	I0814 00:35:29.651314   44271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key
	I0814 00:35:29.651324   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 00:35:29.651337   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 00:35:29.651352   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 00:35:29.651365   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 00:35:29.651380   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 00:35:29.651393   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 00:35:29.651406   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 00:35:29.651418   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 00:35:29.651466   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:35:29.651493   44271 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:35:29.651503   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:35:29.651526   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:35:29.651547   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:35:29.651573   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:35:29.651609   44271 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:35:29.651643   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.651657   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.651670   44271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem -> /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.652235   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:35:29.674628   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:35:29.696286   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:35:29.717270   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:35:29.739572   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 00:35:29.761334   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 00:35:29.782503   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:35:29.804098   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/multinode-745925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 00:35:29.825978   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:35:29.847189   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:35:29.867924   44271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:35:29.889316   44271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:35:29.903920   44271 ssh_runner.go:195] Run: openssl version
	I0814 00:35:29.909175   44271 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0814 00:35:29.909253   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:35:29.918805   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922707   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922786   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.922838   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:35:29.927994   44271 command_runner.go:130] > 51391683
	I0814 00:35:29.928121   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:35:29.936740   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:35:29.946889   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950847   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950870   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.950905   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:35:29.956186   44271 command_runner.go:130] > 3ec20f2e
	I0814 00:35:29.956238   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:35:29.964992   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:35:29.975179   44271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979118   44271 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979239   44271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.979286   44271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:35:29.984278   44271 command_runner.go:130] > b5213941
	I0814 00:35:29.984328   44271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
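	The three blocks above follow the same pattern: hash the CA certificate with openssl and symlink it as <hash>.0 under /etc/ssl/certs so TLS clients can find it. A minimal Go sketch of the same two steps, shelling out to openssl exactly as the log does (the paths are taken from the log; the function itself is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject-name hash of certPath and links it
	// as <hash>.0 inside certsDir, mirroring the `openssl x509 -hash` + `ln -fs` steps above.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}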
	I0814 00:35:29.992818   44271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:35:29.996719   44271 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:35:29.996738   44271 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0814 00:35:29.996744   44271 command_runner.go:130] > Device: 253,1	Inode: 7338518     Links: 1
	I0814 00:35:29.996750   44271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 00:35:29.996756   44271 command_runner.go:130] > Access: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996761   44271 command_runner.go:130] > Modify: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996765   44271 command_runner.go:130] > Change: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996770   44271 command_runner.go:130] >  Birth: 2024-08-14 00:28:49.868086845 +0000
	I0814 00:35:29.996808   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 00:35:30.001812   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.001885   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 00:35:30.006858   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.006916   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 00:35:30.011694   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.011888   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 00:35:30.016736   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.016781   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 00:35:30.021575   44271 command_runner.go:130] > Certificate will not expire
	I0814 00:35:30.021802   44271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 00:35:30.026653   44271 command_runner.go:130] > Certificate will not expire
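	Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. An equivalent check can be done in Go with crypto/x509; this sketch is only an illustration of the check, not minikube's implementation.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath expires within d.
	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) < d, nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire within 24h")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}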
	I0814 00:35:30.026711   44271 kubeadm.go:392] StartCluster: {Name:multinode-745925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-745925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.201 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.225 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:35:30.026818   44271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:35:30.026886   44271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:35:30.060478   44271 command_runner.go:130] > eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6
	I0814 00:35:30.060508   44271 command_runner.go:130] > da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42
	I0814 00:35:30.060517   44271 command_runner.go:130] > c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db
	I0814 00:35:30.060531   44271 command_runner.go:130] > c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e
	I0814 00:35:30.060540   44271 command_runner.go:130] > d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7
	I0814 00:35:30.060548   44271 command_runner.go:130] > da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba
	I0814 00:35:30.060557   44271 command_runner.go:130] > 98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7
	I0814 00:35:30.060575   44271 command_runner.go:130] > 0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963
	I0814 00:35:30.061958   44271 cri.go:89] found id: "eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6"
	I0814 00:35:30.061978   44271 cri.go:89] found id: "da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42"
	I0814 00:35:30.061985   44271 cri.go:89] found id: "c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db"
	I0814 00:35:30.061989   44271 cri.go:89] found id: "c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e"
	I0814 00:35:30.061993   44271 cri.go:89] found id: "d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7"
	I0814 00:35:30.061998   44271 cri.go:89] found id: "da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba"
	I0814 00:35:30.062002   44271 cri.go:89] found id: "98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7"
	I0814 00:35:30.062006   44271 cri.go:89] found id: "0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963"
	I0814 00:35:30.062010   44271 cri.go:89] found id: ""
	I0814 00:35:30.062078   44271 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.537149772Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebe4a079-649f-4b96-8c63-1172ba8f8e16 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.538172965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9b2dfa4-d55f-42ef-8ea3-df67b5e711fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.538564845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595977538544130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9b2dfa4-d55f-42ef-8ea3-df67b5e711fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.539407301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bae34e3-8e02-4fc6-b197-d537fe987161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.539475011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bae34e3-8e02-4fc6-b197-d537fe987161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.539878944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3834013e3a57d6a20bf3999d8ab486311628761b6cbb3a792f6731f48e873e6,PodSandboxId:88538e8639c0a344a975d49fc0aee49bdbdc39a385e85878f20bd116d378a30d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723595412998824243,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6,PodSandboxId:359828cbd3a6a9fad65e8c86e16e2f0e5deb75986dd961d24153e982a8727a72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723595356394401050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42,PodSandboxId:30168f7b62fd63b6b2c3212c5175b14ff61f4c808c3979ac07c1f5f6fcfa9335,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723595356336143983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db,PodSandboxId:e6bc55e0ce83be8673eeb2f68a71843cbaac04ddb0387fee4bf00160f7970974,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723595344652296579,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e,PodSandboxId:55e339b006ca5ee34d4de2e10968d4bc40458ca8077168d59b722769f68c5790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723595344559937852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-
6b8e826fc666,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba,PodSandboxId:9823caa1bb86e291545ce708c22d1d2789e8132ae25167c05224476dae42fc58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723595333719586256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{i
o.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7,PodSandboxId:b7ca0981e0cd9d9d8d2c48e90a3df655ed0174e28d6bf844e716f4ad09d31c68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723595333738785481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963,PodSandboxId:43bc1a5d8bfa6b0dec1e3817f318d1617ddcdf10ba744772540fbd7d75cba12f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723595333701664563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7,PodSandboxId:f58497b82867e2069e6294da926112064a0da4dba22ae94753c88711bb267a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723595333706988101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bae34e3-8e02-4fc6-b197-d537fe987161 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.579992749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8765b81d-8d51-43fd-a752-b91935eaa76a name=/runtime.v1.RuntimeService/Version
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.580069782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8765b81d-8d51-43fd-a752-b91935eaa76a name=/runtime.v1.RuntimeService/Version
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.582175249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5646f4ae-6108-450d-a069-dc849eb2ba9b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.582581342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595977582557896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5646f4ae-6108-450d-a069-dc849eb2ba9b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.583091580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cf8724d-fa14-46b2-82d6-42fe0d9c7c83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.583158567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cf8724d-fa14-46b2-82d6-42fe0d9c7c83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.583512957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3834013e3a57d6a20bf3999d8ab486311628761b6cbb3a792f6731f48e873e6,PodSandboxId:88538e8639c0a344a975d49fc0aee49bdbdc39a385e85878f20bd116d378a30d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723595412998824243,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6,PodSandboxId:359828cbd3a6a9fad65e8c86e16e2f0e5deb75986dd961d24153e982a8727a72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723595356394401050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42,PodSandboxId:30168f7b62fd63b6b2c3212c5175b14ff61f4c808c3979ac07c1f5f6fcfa9335,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723595356336143983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db,PodSandboxId:e6bc55e0ce83be8673eeb2f68a71843cbaac04ddb0387fee4bf00160f7970974,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723595344652296579,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e,PodSandboxId:55e339b006ca5ee34d4de2e10968d4bc40458ca8077168d59b722769f68c5790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723595344559937852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-
6b8e826fc666,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba,PodSandboxId:9823caa1bb86e291545ce708c22d1d2789e8132ae25167c05224476dae42fc58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723595333719586256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{i
o.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7,PodSandboxId:b7ca0981e0cd9d9d8d2c48e90a3df655ed0174e28d6bf844e716f4ad09d31c68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723595333738785481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963,PodSandboxId:43bc1a5d8bfa6b0dec1e3817f318d1617ddcdf10ba744772540fbd7d75cba12f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723595333701664563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7,PodSandboxId:f58497b82867e2069e6294da926112064a0da4dba22ae94753c88711bb267a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723595333706988101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cf8724d-fa14-46b2-82d6-42fe0d9c7c83 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.608080439Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c209fad-c10e-4c94-8a57-e54b862481d6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.608341939Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-q5qs4,Uid:2be22ae6-914d-4acc-a956-458c46f75090,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595770733981440,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T00:35:36.597549378Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-42npp,Uid:5270fbf9-4681-4491-9e81-7538ba368977,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1723595736960596502,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T00:35:36.597550672Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&PodSandboxMetadata{Name:kube-proxy-wjs78,Uid:c84ef830-a81e-46d5-8dc1-6b8e826fc666,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595736937526607,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-08-14T00:35:36.597556125Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:66743779-9b1f-437e-8554-6f828a863e02,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595736932412900,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-14T00:35:36.597541829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&PodSandboxMetadata{Name:kindnet-dpqll,Uid:c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595736918038532,Labels:map[string]string{app: kindnet,controller-revision-hash: 5857d8f49,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,k8s-app: kindnet,pod-template-generati
on: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T00:35:36.597553294Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-745925,Uid:785240a4505ff96df5aeec83a66029df,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595732062065165,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 785240a4505ff96df5aeec83a66029df,kubernetes.io/config.seen: 2024-08-14T00:35:31.594445557Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&PodSandboxMetadata{Name:etcd-multinode-745925
,Uid:4728b2edfec0f1c2f274c0eb1d84a79e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595732060753358,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.201:2379,kubernetes.io/config.hash: 4728b2edfec0f1c2f274c0eb1d84a79e,kubernetes.io/config.seen: 2024-08-14T00:35:31.594487482Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-745925,Uid:4f851d83a2c3e07f14dec64e8651a3ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595732048464878,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-a
piserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.201:8443,kubernetes.io/config.hash: 4f851d83a2c3e07f14dec64e8651a3ec,kubernetes.io/config.seen: 2024-08-14T00:35:31.594488681Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-745925,Uid:da57b742c36b6c761c31b0d0b9d29de5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723595732044660612,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: da57b742c36b6c761c31b0d0b9d29de5,kubernetes.io/config.seen: 2024-08-14T00:35:31.594489803Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3c209fad-c10e-4c94-8a57-e54b862481d6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.609027582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57bec5d1-e999-47d3-b3a8-8167275f7102 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.609122244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57bec5d1-e999-47d3-b3a8-8167275f7102 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.609458724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57bec5d1-e999-47d3-b3a8-8167275f7102 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.622163533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5edae198-de2f-43fe-a97f-756c187fbe15 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.622251751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5edae198-de2f-43fe-a97f-756c187fbe15 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.623447975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc367a77-553b-442c-a89b-9b0d3c72c599 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.624088886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595977624065320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc367a77-553b-442c-a89b-9b0d3c72c599 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.624575374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3634a2d1-964d-4874-934a-64f5ecd8fe15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.624646087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3634a2d1-964d-4874-934a-64f5ecd8fe15 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:39:37 multinode-745925 crio[2741]: time="2024-08-14 00:39:37.625014100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdd01c460191f7cef86d4bd30239ffbae3175c12c2b4a861d542e57d9aeb7b32,PodSandboxId:0d5d2a387657aad12c763de116f41321dcb185a108cccaf0fccead5521a01e9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723595770921657606,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7,PodSandboxId:5215f31e9dadbebac285182026b4a1b860c0bb70956c0441122f1e75a1ca3401,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723595737363921993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67,PodSandboxId:83e8f1fa57c59190bcc3a9a9a70295532baa5410fd9ca03da8153096a613a78f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723595737238174970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699,PodSandboxId:8e7eac2966ebf71edad71801d0c98a6e948ebc90803a09e444ebff640da8db8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723595737139537458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-6b8e826fc666,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f8ca6a66fd01793525fbfe5df2bb98677bbf13445f26b53da75040073c63f54,PodSandboxId:994050b8051dbca82143ba3b1d04d6c0ac8360b77b6ffad3798e36a6e3c9392a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723595737096452555,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48,PodSandboxId:9a12d7bb37b38941445b9c2fc25822c58cc3b206856b300c973f52c36f432689,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723595732293563260,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a,PodSandboxId:926eca98650905da1bdb3d2222f512929d09aa2141f5b69c2681bd2703aa666c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723595732287504554,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b,PodSandboxId:8505babb9c47b3f31b52344ef6332498bd00b3860864a978f77acbc414451c34,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723595732248368032,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2,PodSandboxId:f049204edba77b1fdc056392ab6bd5a0fee87a71c65c60cdde93f59308747321,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723595732209594326,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3834013e3a57d6a20bf3999d8ab486311628761b6cbb3a792f6731f48e873e6,PodSandboxId:88538e8639c0a344a975d49fc0aee49bdbdc39a385e85878f20bd116d378a30d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723595412998824243,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-q5qs4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2be22ae6-914d-4acc-a956-458c46f75090,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6,PodSandboxId:359828cbd3a6a9fad65e8c86e16e2f0e5deb75986dd961d24153e982a8727a72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723595356394401050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-42npp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5270fbf9-4681-4491-9e81-7538ba368977,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da26853bb0e0e4f765db1e5539436826636f3097e5d75d7aab7d9cf136a5fe42,PodSandboxId:30168f7b62fd63b6b2c3212c5175b14ff61f4c808c3979ac07c1f5f6fcfa9335,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723595356336143983,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 66743779-9b1f-437e-8554-6f828a863e02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db,PodSandboxId:e6bc55e0ce83be8673eeb2f68a71843cbaac04ddb0387fee4bf00160f7970974,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723595344652296579,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-dpqll,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c4e4b3c8-077a-4af9-8b09-463d6ff33bb7,},Annotations:map[string]string{io.kubernetes.container.hash: 49c194ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e,PodSandboxId:55e339b006ca5ee34d4de2e10968d4bc40458ca8077168d59b722769f68c5790,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723595344559937852,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wjs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ef830-a81e-46d5-8dc1-
6b8e826fc666,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba,PodSandboxId:9823caa1bb86e291545ce708c22d1d2789e8132ae25167c05224476dae42fc58,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723595333719586256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4728b2edfec0f1c2f274c0eb1d84a79e,},Annotations:map[string]string{i
o.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7,PodSandboxId:b7ca0981e0cd9d9d8d2c48e90a3df655ed0174e28d6bf844e716f4ad09d31c68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723595333738785481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785240a4505ff96df5aeec83a66029df,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963,PodSandboxId:43bc1a5d8bfa6b0dec1e3817f318d1617ddcdf10ba744772540fbd7d75cba12f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723595333701664563,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f851d83a2c3e07f14dec64e8651a3ec,},Annotations:map[string]string{io.kubernetes.container.hash: f
72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7,PodSandboxId:f58497b82867e2069e6294da926112064a0da4dba22ae94753c88711bb267a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723595333706988101,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-745925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da57b742c36b6c761c31b0d0b9d29de5,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3634a2d1-964d-4874-934a-64f5ecd8fe15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdd01c460191f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   0d5d2a387657a       busybox-7dff88458-q5qs4
	9a8f8d0cbc5b0       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   5215f31e9dadb       kindnet-dpqll
	83669ef86608e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   83e8f1fa57c59       coredns-6f6b679f8f-42npp
	1d40d3ea3bdcb       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   8e7eac2966ebf       kube-proxy-wjs78
	8f8ca6a66fd01       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   994050b8051db       storage-provisioner
	4388dbc128543       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   9a12d7bb37b38       kube-scheduler-multinode-745925
	26907166372cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   926eca9865090       etcd-multinode-745925
	5293423821d46       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   8505babb9c47b       kube-controller-manager-multinode-745925
	b16f783a23a0c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   f049204edba77       kube-apiserver-multinode-745925
	c3834013e3a57       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   88538e8639c0a       busybox-7dff88458-q5qs4
	eae8460ab3854       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   359828cbd3a6a       coredns-6f6b679f8f-42npp
	da26853bb0e0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   30168f7b62fd6       storage-provisioner
	c0a5b54e67fb3       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      10 minutes ago      Exited              kindnet-cni               0                   e6bc55e0ce83b       kindnet-dpqll
	c4fcefb3fdcea       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   55e339b006ca5       kube-proxy-wjs78
	d3b1b29521697       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   b7ca0981e0cd9       kube-scheduler-multinode-745925
	da0de171031bf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   9823caa1bb86e       etcd-multinode-745925
	98cb3f5f3e0f6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   f58497b82867e       kube-controller-manager-multinode-745925
	0826d520b837d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   43bc1a5d8bfa6       kube-apiserver-multinode-745925
	
	
	==> coredns [83669ef86608ef7b74baf1cd13371fce7a52c8c4c38820f713991b5aae72da67] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51980 - 44192 "HINFO IN 4627935223448981007.9129784899072471225. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01195331s
	
	
	==> coredns [eae8460ab3854c106136e0ebfc2e6438f306e26ac28df73f29788b373b39c1a6] <==
	[INFO] 10.244.1.2:46018 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001534261s
	[INFO] 10.244.1.2:43464 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070235s
	[INFO] 10.244.1.2:38869 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075645s
	[INFO] 10.244.1.2:34500 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001099481s
	[INFO] 10.244.1.2:55262 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064953s
	[INFO] 10.244.1.2:60067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074835s
	[INFO] 10.244.1.2:54269 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058182s
	[INFO] 10.244.0.3:33565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072903s
	[INFO] 10.244.0.3:55121 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000159692s
	[INFO] 10.244.0.3:50568 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036901s
	[INFO] 10.244.0.3:54217 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031753s
	[INFO] 10.244.1.2:43247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105057s
	[INFO] 10.244.1.2:35711 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064728s
	[INFO] 10.244.1.2:59512 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054406s
	[INFO] 10.244.1.2:37804 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048475s
	[INFO] 10.244.0.3:33612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080442s
	[INFO] 10.244.0.3:48076 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107043s
	[INFO] 10.244.0.3:50714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062317s
	[INFO] 10.244.0.3:42453 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050605s
	[INFO] 10.244.1.2:60045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151408s
	[INFO] 10.244.1.2:42388 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090237s
	[INFO] 10.244.1.2:57343 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102036s
	[INFO] 10.244.1.2:60620 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107288s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-745925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-745925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=multinode-745925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_28_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-745925
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:39:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:28:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:35:35 +0000   Wed, 14 Aug 2024 00:29:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    multinode-745925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a29b00ae6c9445bb9ec55db18ac99634
	  System UUID:                a29b00ae-6c94-45bb-9ec5-5db18ac99634
	  Boot ID:                    d038d6a5-2e6b-4a2c-a4b9-cc85ebf99a02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5qs4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 coredns-6f6b679f8f-42npp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-745925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-dpqll                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-745925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-745925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wjs78                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-745925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-745925 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-745925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-745925 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-745925 event: Registered Node multinode-745925 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-745925 status is now: NodeReady
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-745925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-745925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-745925 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                node-controller  Node multinode-745925 event: Registered Node multinode-745925 in Controller
	
	
	Name:               multinode-745925-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-745925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=multinode-745925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T00_36_15_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:36:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-745925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:37:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:37:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:37:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:37:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 00:36:45 +0000   Wed, 14 Aug 2024 00:37:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-745925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 91f5344f07d1452cb1780ba954b6bfac
	  System UUID:                91f5344f-07d1-452c-b178-0ba954b6bfac
	  Boot ID:                    0e30ab73-0588-4779-8d5b-6d70176877c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mklsc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-jldn7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m50s
	  kube-system                 kube-proxy-69crd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m50s (x2 over 9m51s)  kubelet          Node multinode-745925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s (x2 over 9m51s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m50s (x2 over 9m51s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m30s                  kubelet          Node multinode-745925-m02 status is now: NodeReady
	  Normal  Starting                 3m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-745925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-745925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node multinode-745925-m02 event: Registered Node multinode-745925-m02 in Controller
	  Normal  NodeReady                3m3s                   kubelet          Node multinode-745925-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-745925-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.047949] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.185818] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.096502] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.240879] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +3.695074] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.227842] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.064972] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.985702] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.084765] kauditd_printk_skb: 69 callbacks suppressed
	[Aug14 00:29] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.137775] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.318947] kauditd_printk_skb: 60 callbacks suppressed
	[Aug14 00:30] kauditd_printk_skb: 12 callbacks suppressed
	[Aug14 00:35] systemd-fstab-generator[2657]: Ignoring "noauto" option for root device
	[  +0.142572] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[  +0.171268] systemd-fstab-generator[2684]: Ignoring "noauto" option for root device
	[  +0.130113] systemd-fstab-generator[2696]: Ignoring "noauto" option for root device
	[  +0.264436] systemd-fstab-generator[2724]: Ignoring "noauto" option for root device
	[  +3.903620] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +1.835360] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +0.086317] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.575789] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.687003] systemd-fstab-generator[3788]: Ignoring "noauto" option for root device
	[  +0.108937] kauditd_printk_skb: 36 callbacks suppressed
	[Aug14 00:36] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [26907166372cd38b39caa8d25e060e61509027312b2f0325fcac273f1a90ce9a] <==
	{"level":"info","ts":"2024-08-14T00:35:32.783418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 switched to configuration voters=(8292785523550360663)"}
	{"level":"info","ts":"2024-08-14T00:35:32.792148Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","added-peer-id":"7315e47f21b89457","added-peer-peer-urls":["https://192.168.39.201:2380"]}
	{"level":"info","ts":"2024-08-14T00:35:32.792290Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:35:32.792339Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:35:32.794137Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T00:35:32.802319Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:35:32.802416Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:35:32.806985Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7315e47f21b89457","initial-advertise-peer-urls":["https://192.168.39.201:2380"],"listen-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.201:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T00:35:32.810869Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T00:35:34.534360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgPreVoteResp from 7315e47f21b89457 at term 2"}
	{"level":"info","ts":"2024-08-14T00:35:34.534480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 received MsgVoteResp from 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7315e47f21b89457 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.534516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7315e47f21b89457 elected leader 7315e47f21b89457 at term 3"}
	{"level":"info","ts":"2024-08-14T00:35:34.538898Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7315e47f21b89457","local-member-attributes":"{Name:multinode-745925 ClientURLs:[https://192.168.39.201:2379]}","request-path":"/0/members/7315e47f21b89457/attributes","cluster-id":"1777413e1d1fef45","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:35:34.538941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:35:34.539140Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:35:34.539161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:35:34.539178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:35:34.540110Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:35:34.540931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:35:34.540113Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:35:34.541723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.201:2379"}
	
	
	==> etcd [da0de171031bff0d0f0c40041b75aa57d0acaa98a37e4fa2143fff8554a135ba] <==
	{"level":"info","ts":"2024-08-14T00:28:55.111326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:28:55.111349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:28:55.111653Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:28:55.111971Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1777413e1d1fef45","local-member-id":"7315e47f21b89457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112067Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112109Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:28:55.112432Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:28:55.112698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:28:55.113210Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:28:55.113521Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.201:2379"}
	{"level":"warn","ts":"2024-08-14T00:29:46.986169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.339589ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689172006015982457 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-745925-m02.17eb70df49150e60\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-745925-m02.17eb70df49150e60\" value_size:642 lease:1465799969161206075 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T00:29:46.986409Z","caller":"traceutil/trace.go:171","msg":"trace[132266961] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"240.714098ms","start":"2024-08-14T00:29:46.745683Z","end":"2024-08-14T00:29:46.986397Z","steps":["trace[132266961] 'process raft request'  (duration: 76.754473ms)","trace[132266961] 'compare'  (duration: 163.171551ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T00:30:39.467992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.741361ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10689172006015982967 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-745925-m03.17eb70eb82fe355b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-745925-m03.17eb70eb82fe355b\" value_size:646 lease:1465799969161206861 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T00:30:39.468386Z","caller":"traceutil/trace.go:171","msg":"trace[1910179642] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"230.216937ms","start":"2024-08-14T00:30:39.238135Z","end":"2024-08-14T00:30:39.468352Z","steps":["trace[1910179642] 'process raft request'  (duration: 74.953018ms)","trace[1910179642] 'compare'  (duration: 154.61239ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T00:31:33.249731Z","caller":"traceutil/trace.go:171","msg":"trace[1150244368] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"139.477068ms","start":"2024-08-14T00:31:33.110235Z","end":"2024-08-14T00:31:33.249712Z","steps":["trace[1150244368] 'process raft request'  (duration: 138.337249ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T00:33:53.710910Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T00:33:53.711047Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-745925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"]}
	{"level":"warn","ts":"2024-08-14T00:33:53.711164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.711260Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.790638Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.201:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:33:53.790721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.201:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T00:33:53.792200Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7315e47f21b89457","current-leader-member-id":"7315e47f21b89457"}
	{"level":"info","ts":"2024-08-14T00:33:53.794555Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:33:53.794700Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.201:2380"}
	{"level":"info","ts":"2024-08-14T00:33:53.794732Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-745925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.201:2380"],"advertise-client-urls":["https://192.168.39.201:2379"]}
	
	
	==> kernel <==
	 00:39:38 up 11 min,  0 users,  load average: 0.62, 0.45, 0.25
	Linux multinode-745925 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9a8f8d0cbc5b0251014bd8c2230d63236db842bf1b0f787d972c920cd149b6a7] <==
	I0814 00:38:28.278871       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:38:38.269924       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:38:38.269973       1 main.go:299] handling current node
	I0814 00:38:38.269989       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:38:38.269995       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:38:48.278570       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:38:48.278707       1 main.go:299] handling current node
	I0814 00:38:48.278743       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:38:48.278767       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:38:58.278585       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:38:58.278691       1 main.go:299] handling current node
	I0814 00:38:58.278729       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:38:58.278735       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:39:08.269491       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:39:08.269534       1 main.go:299] handling current node
	I0814 00:39:08.269547       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:39:08.269552       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:39:18.275982       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:39:18.276034       1 main.go:299] handling current node
	I0814 00:39:18.276055       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:39:18.276062       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:39:28.272980       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:39:28.273091       1 main.go:299] handling current node
	I0814 00:39:28.273119       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:39:28.273138       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [c0a5b54e67fb3eded06b4f3af4d4ca0576c1bc12e7fcc728d1abae5fc42964db] <==
	I0814 00:33:05.680246       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:15.682842       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:15.682953       1 main.go:299] handling current node
	I0814 00:33:15.682981       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:15.682999       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:15.683150       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:15.683173       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:25.681267       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:25.681295       1 main.go:299] handling current node
	I0814 00:33:25.681308       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:25.681313       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:25.681496       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:25.681514       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:35.689356       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:35.689502       1 main.go:299] handling current node
	I0814 00:33:35.689534       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:35.689554       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:35.689709       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:35.689736       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	I0814 00:33:45.682662       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0814 00:33:45.682877       1 main.go:299] handling current node
	I0814 00:33:45.682913       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0814 00:33:45.682934       1 main.go:322] Node multinode-745925-m02 has CIDR [10.244.1.0/24] 
	I0814 00:33:45.683092       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 00:33:45.683113       1 main.go:322] Node multinode-745925-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [0826d520b837d333cb9e8db12cfb2a3195420daee26767cd7ebe43cd46ff2963] <==
	W0814 00:33:53.741269       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741363       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741419       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741500       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741556       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741604       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741650       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741709       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741768       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741868       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.741933       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742090       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742160       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742209       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742256       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742307       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742364       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742418       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742465       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742622       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742699       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742758       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.742932       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.743006       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:33:53.743080       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b16f783a23a0ce16a50f7dcec0d5a31b8d36401c26ddb636e6cc01c981fe26a2] <==
	I0814 00:35:35.744895       1 aggregator.go:171] initial CRD sync complete...
	I0814 00:35:35.744931       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 00:35:35.744942       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 00:35:35.766964       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 00:35:35.786313       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 00:35:35.787542       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 00:35:35.789976       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:35:35.812661       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:35:35.812702       1 policy_source.go:224] refreshing policies
	I0814 00:35:35.847154       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:35:35.866849       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 00:35:35.866887       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 00:35:35.866963       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 00:35:35.867012       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 00:35:35.867018       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0814 00:35:35.875047       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0814 00:35:35.875241       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:35:36.675902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 00:35:37.869372       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 00:35:38.044392       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 00:35:38.063939       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 00:35:38.134312       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 00:35:38.144051       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 00:35:39.211744       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:35:39.460927       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5293423821d467aa472c3a3bd890f34c1911316eae819cef2d0ca642e7a4be6b] <==
	E0814 00:36:53.323227       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-745925-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-745925-m03"
	E0814 00:36:53.323395       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-745925-m03': failed to patch node CIDR: Node \"multinode-745925-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0814 00:36:53.323507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:53.328794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:53.702020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:54.039026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:36:54.160539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:03.400165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:11.621923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:37:11.622309       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:11.633729       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:14.161509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:16.024992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:16.039409       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:16.481038       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:37:16.481109       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:37:59.179694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:37:59.200591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:37:59.237682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.639743ms"
	I0814 00:37:59.238507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.654µs"
	I0814 00:38:04.288232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:38:19.051027       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vlh75"
	I0814 00:38:19.079330       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vlh75"
	I0814 00:38:19.079517       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-n2qv9"
	I0814 00:38:19.102636       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-n2qv9"
	
	
	==> kube-controller-manager [98cb3f5f3e0f6b39eb346dacc9c9584057abe80a2b3fead9e9a4160cae92d7d7] <==
	I0814 00:31:28.004175       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:28.004925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:28.998411       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:28.998658       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-745925-m03\" does not exist"
	I0814 00:31:29.030492       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-745925-m03" podCIDRs=["10.244.4.0/24"]
	I0814 00:31:29.030527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.030548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.042181       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.051234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:29.373645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:33.252200       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:39.297700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:48.447288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:48.447347       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m02"
	I0814 00:31:48.458890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:31:53.054088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.070708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:33.071033       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-745925-m03"
	I0814 00:32:33.074241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.095573       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:33.113502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	I0814 00:32:33.138160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.635545ms"
	I0814 00:32:33.138846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.562µs"
	I0814 00:32:38.141476       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m02"
	I0814 00:32:48.213395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-745925-m03"
	
	
	==> kube-proxy [1d40d3ea3bdcb515b59751a57cd931bbdf9e7836ae4e8760dafd40c83f53e699] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:35:37.574430       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:35:37.589325       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E0814 00:35:37.589424       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:35:37.640432       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:35:37.640492       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:35:37.640520       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:35:37.643905       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:35:37.644208       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:35:37.644366       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:35:37.645752       1 config.go:197] "Starting service config controller"
	I0814 00:35:37.645885       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:35:37.645973       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:35:37.646008       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:35:37.646513       1 config.go:326] "Starting node config controller"
	I0814 00:35:37.647873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:35:37.746060       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:35:37.746111       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:35:37.748163       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c4fcefb3fdcea5401dcac2f0926c175690591bc6708e7487325a84810c9a3b6e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:29:04.986961       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:29:05.003706       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.201"]
	E0814 00:29:05.003783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:29:05.073544       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:29:05.073598       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:29:05.073629       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:29:05.075720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:29:05.076078       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:29:05.076102       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:29:05.077513       1 config.go:197] "Starting service config controller"
	I0814 00:29:05.077557       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:29:05.077578       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:29:05.077582       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:29:05.079610       1 config.go:326] "Starting node config controller"
	I0814 00:29:05.079636       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:29:05.178226       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:29:05.178287       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:29:05.179839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4388dbc1285435fa23d08e95bdc7fe47ea3ec8c1d661b27176003aa77f6d6c48] <==
	I0814 00:35:33.229607       1 serving.go:386] Generated self-signed cert in-memory
	W0814 00:35:35.690886       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 00:35:35.691047       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 00:35:35.691075       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 00:35:35.691156       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 00:35:35.765310       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 00:35:35.765644       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:35:35.768108       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 00:35:35.768550       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 00:35:35.768578       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 00:35:35.769289       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W0814 00:35:35.795147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:35:35.795203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.796440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:35:35.798188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.798397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 00:35:35.798513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:35:35.800545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:35:35.802871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 00:35:35.870271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d3b1b295216971a1ac2224e9411af3298ba5a538e1ad234dff54e5489d7945f7] <==
	E0814 00:28:56.317230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 00:28:56.317296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:56.317349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:28:56.317406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 00:28:56.318051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:28:56.318111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:56.314932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:28:56.318191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.168970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:28:57.169126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.406763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:28:57.406940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.436759       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 00:28:57.436886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.499518       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 00:28:57.499563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:28:57.525941       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:28:57.525995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 00:28:57.799295       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 00:33:53.714717       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 00:38:21 multinode-745925 kubelet[2952]: E0814 00:38:21.697430    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595901696625198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:31 multinode-745925 kubelet[2952]: E0814 00:38:31.622699    2952 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 00:38:31 multinode-745925 kubelet[2952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 00:38:31 multinode-745925 kubelet[2952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:38:31 multinode-745925 kubelet[2952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:38:31 multinode-745925 kubelet[2952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:38:31 multinode-745925 kubelet[2952]: E0814 00:38:31.699891    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595911699375298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:31 multinode-745925 kubelet[2952]: E0814 00:38:31.699973    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595911699375298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:41 multinode-745925 kubelet[2952]: E0814 00:38:41.701419    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595921700979855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:41 multinode-745925 kubelet[2952]: E0814 00:38:41.701468    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595921700979855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:51 multinode-745925 kubelet[2952]: E0814 00:38:51.703402    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595931702984729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:38:51 multinode-745925 kubelet[2952]: E0814 00:38:51.703775    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595931702984729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:01 multinode-745925 kubelet[2952]: E0814 00:39:01.705437    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595941705186511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:01 multinode-745925 kubelet[2952]: E0814 00:39:01.705483    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595941705186511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:11 multinode-745925 kubelet[2952]: E0814 00:39:11.707479    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595951707128776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:11 multinode-745925 kubelet[2952]: E0814 00:39:11.707943    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595951707128776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:21 multinode-745925 kubelet[2952]: E0814 00:39:21.710174    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595961709657549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:21 multinode-745925 kubelet[2952]: E0814 00:39:21.710504    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595961709657549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:31 multinode-745925 kubelet[2952]: E0814 00:39:31.623071    2952 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 00:39:31 multinode-745925 kubelet[2952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 00:39:31 multinode-745925 kubelet[2952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 00:39:31 multinode-745925 kubelet[2952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 00:39:31 multinode-745925 kubelet[2952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 00:39:31 multinode-745925 kubelet[2952]: E0814 00:39:31.713008    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595971712606568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 00:39:31 multinode-745925 kubelet[2952]: E0814 00:39:31.713079    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723595971712606568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134106,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:39:37.247389   46142 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19429-9425/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-745925 -n multinode-745925
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-745925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.11s)
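Side note on the "bufio.Scanner: token too long" error in the stderr output above: this is the standard Go failure mode when a scanned line exceeds the scanner's buffer, which defaults to bufio.MaxScanTokenSize (64 KiB). The snippet below is only an illustrative sketch of that behavior and the usual workaround (Scanner.Buffer), using a hypothetical file path; it is not minikube's actual lastStart.txt handling.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any single line longer than 64 KiB makes sc.Err()
		// return bufio.ErrTooLong ("token too long"), as in the failure above.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024) // allow lines up to 10 MiB
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}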

                                                
                                    
x
+
TestPreload (283.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-789303 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0814 00:45:05.519316   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-789303 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m20.889040516s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-789303 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-789303 image pull gcr.io/k8s-minikube/busybox: (2.800937662s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-789303
E0814 00:47:14.189316   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-789303: exit status 82 (2m0.453326443s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-789303"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-789303 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-14 00:47:46.927481402 +0000 UTC m=+3657.090978606
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-789303 -n test-preload-789303
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-789303 -n test-preload-789303: exit status 3 (18.666126023s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:48:05.590402   48989 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host
	E0814 00:48:05.590421   48989 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.231:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-789303" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-789303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-789303
--- FAIL: TestPreload (283.69s)
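The failure above is a stop timeout: "out/minikube-linux-amd64 stop -p test-preload-789303" exits with status 82 (GUEST_STOP_TIMEOUT) because the kvm2 driver still reports the VM as "Running", and the post-mortem status check can no longer reach the guest at all ("no route to host"). A minimal triage sketch for a stuck profile like this one, assuming shell access to the Jenkins host and the default libvirt system connection; the virsh calls rely on the kvm2 driver naming the libvirt domain after the machine, which the domain XML later in this report shows for a sibling profile:

	# Collect minikube's own log bundle, as the error box suggests (may fail if the guest is unreachable)
	out/minikube-linux-amd64 -p test-preload-789303 logs --file=logs.txt
	# Check what libvirt thinks the domain is doing
	virsh -c qemu:///system list --all
	virsh -c qemu:///system dominfo test-preload-789303
	# Force the domain off and clean up the profile so later tests start from a known state
	virsh -c qemu:///system destroy test-preload-789303
	out/minikube-linux-amd64 delete -p test-preload-789303

The harness itself falls back to the same cleanup ("Cleaning up "test-preload-789303" profile") once the host is unreachable.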

                                                
                                    
x
+
TestKubernetesUpgrade (389.32s)
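In the transcript below, the initial start on Kubernetes v1.20.0 exits with status 109: control-plane bootstrap is retried (note the repeated "Generating certificates and keys" / "Booting up control plane" lines in stdout) and the start ultimately fails. A hedged triage sketch for this kind of bootstrap failure, using only standard minikube and node tooling against the profile named in the log; whether the VM is still reachable over SSH at that point is an assumption:

	# Ask kubelet why the static control-plane pods are not coming up (assumes SSH to the guest still works)
	out/minikube-linux-amd64 -p kubernetes-upgrade-492920 ssh -- sudo journalctl -u kubelet --no-pager -n 100
	# List all CRI-O containers, including exited ones, to see which control-plane component is failing
	out/minikube-linux-amd64 -p kubernetes-upgrade-492920 ssh -- sudo crictl ps -a
	# Full log bundle for attaching to a GitHub issue
	out/minikube-linux-amd64 -p kubernetes-upgrade-492920 logs --file=logs.txt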

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m49.731141174s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-492920] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-492920" primary control-plane node in "kubernetes-upgrade-492920" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:49:56.759827   50563 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:49:56.759925   50563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:49:56.759932   50563 out.go:304] Setting ErrFile to fd 2...
	I0814 00:49:56.759936   50563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:49:56.760129   50563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:49:56.760675   50563 out.go:298] Setting JSON to false
	I0814 00:49:56.761556   50563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5543,"bootTime":1723591054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:49:56.761619   50563 start.go:139] virtualization: kvm guest
	I0814 00:49:56.763033   50563 out.go:177] * [kubernetes-upgrade-492920] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:49:56.764755   50563 notify.go:220] Checking for updates...
	I0814 00:49:56.765662   50563 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:49:56.766811   50563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:49:56.768277   50563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:49:56.770035   50563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:49:56.772280   50563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:49:56.774387   50563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:49:56.776003   50563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:49:56.811874   50563 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 00:49:56.812994   50563 start.go:297] selected driver: kvm2
	I0814 00:49:56.813007   50563 start.go:901] validating driver "kvm2" against <nil>
	I0814 00:49:56.813015   50563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:49:56.813892   50563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:49:56.828443   50563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:49:56.846006   50563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:49:56.846071   50563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 00:49:56.846344   50563 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 00:49:56.846411   50563 cni.go:84] Creating CNI manager for ""
	I0814 00:49:56.846427   50563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:49:56.846440   50563 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 00:49:56.846494   50563 start.go:340] cluster config:
	{Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:49:56.846611   50563 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:49:56.848132   50563 out.go:177] * Starting "kubernetes-upgrade-492920" primary control-plane node in "kubernetes-upgrade-492920" cluster
	I0814 00:49:56.849182   50563 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 00:49:56.849220   50563 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:49:56.849231   50563 cache.go:56] Caching tarball of preloaded images
	I0814 00:49:56.849300   50563 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:49:56.849312   50563 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 00:49:56.849695   50563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/config.json ...
	I0814 00:49:56.849715   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/config.json: {Name:mkad092c980af320fa9b668bbd6e515be3c3c39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:49:56.849881   50563 start.go:360] acquireMachinesLock for kubernetes-upgrade-492920: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:50:17.963168   50563 start.go:364] duration metric: took 21.113260805s to acquireMachinesLock for "kubernetes-upgrade-492920"
	I0814 00:50:17.963274   50563 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 00:50:17.963389   50563 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 00:50:17.965702   50563 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 00:50:17.965913   50563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:50:17.965970   50563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:50:17.984125   50563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I0814 00:50:17.984539   50563 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:50:17.985051   50563 main.go:141] libmachine: Using API Version  1
	I0814 00:50:17.985069   50563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:50:17.985533   50563 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:50:17.985786   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetMachineName
	I0814 00:50:17.985986   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:17.986185   50563 start.go:159] libmachine.API.Create for "kubernetes-upgrade-492920" (driver="kvm2")
	I0814 00:50:17.986205   50563 client.go:168] LocalClient.Create starting
	I0814 00:50:17.986236   50563 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0814 00:50:17.986272   50563 main.go:141] libmachine: Decoding PEM data...
	I0814 00:50:17.986294   50563 main.go:141] libmachine: Parsing certificate...
	I0814 00:50:17.986368   50563 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0814 00:50:17.986402   50563 main.go:141] libmachine: Decoding PEM data...
	I0814 00:50:17.986423   50563 main.go:141] libmachine: Parsing certificate...
	I0814 00:50:17.986447   50563 main.go:141] libmachine: Running pre-create checks...
	I0814 00:50:17.986465   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .PreCreateCheck
	I0814 00:50:17.986923   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetConfigRaw
	I0814 00:50:17.987325   50563 main.go:141] libmachine: Creating machine...
	I0814 00:50:17.987338   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .Create
	I0814 00:50:17.987478   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Creating KVM machine...
	I0814 00:50:17.988644   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found existing default KVM network
	I0814 00:50:17.990032   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:17.989869   50861 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:48:a2} reservation:<nil>}
	I0814 00:50:17.991180   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:17.991102   50861 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0814 00:50:17.991236   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | created network xml: 
	I0814 00:50:17.991259   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | <network>
	I0814 00:50:17.991275   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   <name>mk-kubernetes-upgrade-492920</name>
	I0814 00:50:17.991283   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   <dns enable='no'/>
	I0814 00:50:17.991294   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   
	I0814 00:50:17.991309   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0814 00:50:17.991319   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |     <dhcp>
	I0814 00:50:17.991340   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0814 00:50:17.991353   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |     </dhcp>
	I0814 00:50:17.991363   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   </ip>
	I0814 00:50:17.991373   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG |   
	I0814 00:50:17.991384   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | </network>
	I0814 00:50:17.991406   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | 
	I0814 00:50:17.996882   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | trying to create private KVM network mk-kubernetes-upgrade-492920 192.168.50.0/24...
	I0814 00:50:18.064235   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | private KVM network mk-kubernetes-upgrade-492920 192.168.50.0/24 created
	I0814 00:50:18.064269   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920 ...
	I0814 00:50:18.064290   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:18.064196   50861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:50:18.064311   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 00:50:18.064337   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 00:50:18.314463   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:18.314341   50861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa...
	I0814 00:50:18.411297   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:18.411181   50861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/kubernetes-upgrade-492920.rawdisk...
	I0814 00:50:18.411333   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Writing magic tar header
	I0814 00:50:18.411350   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Writing SSH key tar header
	I0814 00:50:18.411362   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:18.411296   50861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920 ...
	I0814 00:50:18.411434   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920
	I0814 00:50:18.411479   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0814 00:50:18.411502   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920 (perms=drwx------)
	I0814 00:50:18.411514   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:50:18.411525   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0814 00:50:18.411535   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 00:50:18.411548   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0814 00:50:18.411562   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0814 00:50:18.411572   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0814 00:50:18.411578   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home/jenkins
	I0814 00:50:18.411596   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Checking permissions on dir: /home
	I0814 00:50:18.411602   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Skipping /home - not owner
	I0814 00:50:18.411614   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 00:50:18.411621   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 00:50:18.411633   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Creating domain...
	I0814 00:50:18.412717   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) define libvirt domain using xml: 
	I0814 00:50:18.412747   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) <domain type='kvm'>
	I0814 00:50:18.412760   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <name>kubernetes-upgrade-492920</name>
	I0814 00:50:18.412779   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <memory unit='MiB'>2200</memory>
	I0814 00:50:18.412806   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <vcpu>2</vcpu>
	I0814 00:50:18.412828   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <features>
	I0814 00:50:18.412841   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <acpi/>
	I0814 00:50:18.412855   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <apic/>
	I0814 00:50:18.412877   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <pae/>
	I0814 00:50:18.412883   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     
	I0814 00:50:18.412893   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   </features>
	I0814 00:50:18.412898   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <cpu mode='host-passthrough'>
	I0814 00:50:18.412906   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   
	I0814 00:50:18.412911   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   </cpu>
	I0814 00:50:18.412917   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <os>
	I0814 00:50:18.412926   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <type>hvm</type>
	I0814 00:50:18.412954   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <boot dev='cdrom'/>
	I0814 00:50:18.412980   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <boot dev='hd'/>
	I0814 00:50:18.412993   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <bootmenu enable='no'/>
	I0814 00:50:18.413004   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   </os>
	I0814 00:50:18.413014   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   <devices>
	I0814 00:50:18.413025   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <disk type='file' device='cdrom'>
	I0814 00:50:18.413060   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/boot2docker.iso'/>
	I0814 00:50:18.413080   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <target dev='hdc' bus='scsi'/>
	I0814 00:50:18.413103   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <readonly/>
	I0814 00:50:18.413114   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </disk>
	I0814 00:50:18.413132   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <disk type='file' device='disk'>
	I0814 00:50:18.413145   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 00:50:18.413166   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/kubernetes-upgrade-492920.rawdisk'/>
	I0814 00:50:18.413184   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <target dev='hda' bus='virtio'/>
	I0814 00:50:18.413192   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </disk>
	I0814 00:50:18.413200   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <interface type='network'>
	I0814 00:50:18.413211   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <source network='mk-kubernetes-upgrade-492920'/>
	I0814 00:50:18.413224   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <model type='virtio'/>
	I0814 00:50:18.413234   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </interface>
	I0814 00:50:18.413246   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <interface type='network'>
	I0814 00:50:18.413260   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <source network='default'/>
	I0814 00:50:18.413277   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <model type='virtio'/>
	I0814 00:50:18.413287   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </interface>
	I0814 00:50:18.413299   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <serial type='pty'>
	I0814 00:50:18.413312   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <target port='0'/>
	I0814 00:50:18.413323   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </serial>
	I0814 00:50:18.413334   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <console type='pty'>
	I0814 00:50:18.413350   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <target type='serial' port='0'/>
	I0814 00:50:18.413364   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </console>
	I0814 00:50:18.413372   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     <rng model='virtio'>
	I0814 00:50:18.413382   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)       <backend model='random'>/dev/random</backend>
	I0814 00:50:18.413398   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     </rng>
	I0814 00:50:18.413409   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     
	I0814 00:50:18.413420   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)     
	I0814 00:50:18.413432   50563 main.go:141] libmachine: (kubernetes-upgrade-492920)   </devices>
	I0814 00:50:18.413442   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) </domain>
	I0814 00:50:18.413453   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) 
	I0814 00:50:18.417662   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:16:29:d0 in network default
	I0814 00:50:18.418248   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Ensuring networks are active...
	I0814 00:50:18.418268   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:18.419048   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Ensuring network default is active
	I0814 00:50:18.419378   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Ensuring network mk-kubernetes-upgrade-492920 is active
	I0814 00:50:18.419959   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Getting domain xml...
	I0814 00:50:18.420733   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Creating domain...
	I0814 00:50:19.748317   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Waiting to get IP...
	I0814 00:50:19.749285   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:19.749777   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:19.749833   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:19.749759   50861 retry.go:31] will retry after 198.508428ms: waiting for machine to come up
	I0814 00:50:19.950497   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:19.951025   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:19.951054   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:19.950982   50861 retry.go:31] will retry after 282.266075ms: waiting for machine to come up
	I0814 00:50:20.234510   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:20.234971   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:20.234997   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:20.234929   50861 retry.go:31] will retry after 312.473286ms: waiting for machine to come up
	I0814 00:50:20.549448   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:20.549979   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:20.550008   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:20.549933   50861 retry.go:31] will retry after 502.602713ms: waiting for machine to come up
	I0814 00:50:21.054461   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:21.054922   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:21.054949   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:21.054882   50861 retry.go:31] will retry after 720.050679ms: waiting for machine to come up
	I0814 00:50:21.776255   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:21.776737   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:21.776764   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:21.776694   50861 retry.go:31] will retry after 668.520891ms: waiting for machine to come up
	I0814 00:50:22.446571   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:22.447092   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:22.447117   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:22.447049   50861 retry.go:31] will retry after 724.503868ms: waiting for machine to come up
	I0814 00:50:23.173712   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:23.174166   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:23.174211   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:23.174131   50861 retry.go:31] will retry after 1.190277895s: waiting for machine to come up
	I0814 00:50:24.365845   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:24.366337   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:24.366362   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:24.366298   50861 retry.go:31] will retry after 1.212101351s: waiting for machine to come up
	I0814 00:50:25.579672   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:25.580198   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:25.580229   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:25.580131   50861 retry.go:31] will retry after 2.066131151s: waiting for machine to come up
	I0814 00:50:27.648212   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:27.648728   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:27.648757   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:27.648658   50861 retry.go:31] will retry after 2.352461171s: waiting for machine to come up
	I0814 00:50:30.003591   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:30.004011   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:30.004047   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:30.003973   50861 retry.go:31] will retry after 2.964906729s: waiting for machine to come up
	I0814 00:50:32.971240   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:32.971744   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:32.971772   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:32.971682   50861 retry.go:31] will retry after 3.617026042s: waiting for machine to come up
	I0814 00:50:36.592807   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:36.593215   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find current IP address of domain kubernetes-upgrade-492920 in network mk-kubernetes-upgrade-492920
	I0814 00:50:36.593242   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | I0814 00:50:36.593168   50861 retry.go:31] will retry after 3.524323425s: waiting for machine to come up
	I0814 00:50:40.121607   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.122201   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has current primary IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.122226   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Found IP for machine: 192.168.50.136
	I0814 00:50:40.122241   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Reserving static IP address...
	I0814 00:50:40.122686   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-492920", mac: "52:54:00:39:60:27", ip: "192.168.50.136"} in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.196026   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Reserved static IP address: 192.168.50.136
	I0814 00:50:40.196061   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Getting to WaitForSSH function...
	I0814 00:50:40.196070   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Waiting for SSH to be available...
	I0814 00:50:40.199059   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.199500   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.199533   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.199674   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Using SSH client type: external
	I0814 00:50:40.199702   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa (-rw-------)
	I0814 00:50:40.199743   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 00:50:40.199759   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | About to run SSH command:
	I0814 00:50:40.199773   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | exit 0
	I0814 00:50:40.325995   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | SSH cmd err, output: <nil>: 
	I0814 00:50:40.326293   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) KVM machine creation complete!
	I0814 00:50:40.326646   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetConfigRaw
	I0814 00:50:40.327185   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:40.327435   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:40.327633   50563 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 00:50:40.327650   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetState
	I0814 00:50:40.328978   50563 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 00:50:40.328993   50563 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 00:50:40.329013   50563 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 00:50:40.329025   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:40.331288   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.331617   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.331655   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.331727   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:40.331887   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.332049   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.332196   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:40.332406   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:40.332618   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:40.332632   50563 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 00:50:40.437360   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:50:40.437388   50563 main.go:141] libmachine: Detecting the provisioner...
	I0814 00:50:40.437395   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:40.440172   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.440521   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.440553   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.440645   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:40.440815   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.440951   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.441067   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:40.441273   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:40.441452   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:40.441462   50563 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 00:50:40.542734   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 00:50:40.542810   50563 main.go:141] libmachine: found compatible host: buildroot
	I0814 00:50:40.542819   50563 main.go:141] libmachine: Provisioning with buildroot...
	I0814 00:50:40.542827   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetMachineName
	I0814 00:50:40.543071   50563 buildroot.go:166] provisioning hostname "kubernetes-upgrade-492920"
	I0814 00:50:40.543093   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetMachineName
	I0814 00:50:40.543272   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:40.546060   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.546401   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.546428   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.546549   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:40.546735   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.546883   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.547020   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:40.547188   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:40.547375   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:40.547401   50563 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-492920 && echo "kubernetes-upgrade-492920" | sudo tee /etc/hostname
	I0814 00:50:40.659273   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-492920
	
	I0814 00:50:40.659318   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:40.662540   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.662923   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.662951   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.663113   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:40.663300   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.663455   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:40.663618   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:40.663770   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:40.663962   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:40.663986   50563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-492920' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-492920/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-492920' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:50:40.776514   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:50:40.776541   50563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:50:40.776579   50563 buildroot.go:174] setting up certificates
	I0814 00:50:40.776594   50563 provision.go:84] configureAuth start
	I0814 00:50:40.776610   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetMachineName
	I0814 00:50:40.776914   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:50:40.779779   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.780191   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.780215   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.780416   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:40.782633   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.782981   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:40.783006   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:40.783121   50563 provision.go:143] copyHostCerts
	I0814 00:50:40.783181   50563 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:50:40.783196   50563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:50:40.783249   50563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:50:40.783350   50563 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:50:40.783357   50563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:50:40.783377   50563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:50:40.783445   50563 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:50:40.783452   50563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:50:40.783469   50563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:50:40.783519   50563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-492920 san=[127.0.0.1 192.168.50.136 kubernetes-upgrade-492920 localhost minikube]
	I0814 00:50:41.167022   50563 provision.go:177] copyRemoteCerts
	I0814 00:50:41.167084   50563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:50:41.167109   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:41.169898   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.170356   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.170399   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.170716   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:41.170911   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.171096   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:41.171288   50563 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:50:41.256345   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0814 00:50:41.280640   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 00:50:41.304167   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:50:41.332500   50563 provision.go:87] duration metric: took 555.889008ms to configureAuth
	I0814 00:50:41.332531   50563 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:50:41.332708   50563 config.go:182] Loaded profile config "kubernetes-upgrade-492920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 00:50:41.332797   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:41.336179   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.336539   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.336570   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.336771   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:41.336993   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.337203   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.337387   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:41.337607   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:41.337801   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:41.337824   50563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:50:41.862990   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:50:41.863013   50563 main.go:141] libmachine: Checking connection to Docker...
	I0814 00:50:41.863021   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetURL
	I0814 00:50:41.864307   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | Using libvirt version 6000000
	I0814 00:50:41.866337   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.866777   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.866806   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.866922   50563 main.go:141] libmachine: Docker is up and running!
	I0814 00:50:41.866939   50563 main.go:141] libmachine: Reticulating splines...
	I0814 00:50:41.866947   50563 client.go:171] duration metric: took 23.88073472s to LocalClient.Create
	I0814 00:50:41.866974   50563 start.go:167] duration metric: took 23.880789915s to libmachine.API.Create "kubernetes-upgrade-492920"
	I0814 00:50:41.866984   50563 start.go:293] postStartSetup for "kubernetes-upgrade-492920" (driver="kvm2")
	I0814 00:50:41.866993   50563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:50:41.867012   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:41.867247   50563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:50:41.867273   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:41.869469   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.869883   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.869911   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.870020   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:41.870207   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.870395   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:41.870578   50563 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:50:41.955863   50563 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:50:41.960067   50563 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:50:41.960096   50563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:50:41.960169   50563 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:50:41.960263   50563 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:50:41.960378   50563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:50:41.969382   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:50:41.991835   50563 start.go:296] duration metric: took 124.825699ms for postStartSetup
	I0814 00:50:41.991903   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetConfigRaw
	I0814 00:50:41.992527   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:50:41.995307   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.995703   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.995730   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.995923   50563 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/config.json ...
	I0814 00:50:41.996123   50563 start.go:128] duration metric: took 24.032722812s to createHost
	I0814 00:50:41.996150   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:41.998214   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.998449   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:41.998482   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:41.998598   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:41.998806   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.998968   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:41.999105   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:41.999244   50563 main.go:141] libmachine: Using SSH client type: native
	I0814 00:50:41.999480   50563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:50:41.999491   50563 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 00:50:42.102701   50563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723596642.065739809
	
	I0814 00:50:42.102722   50563 fix.go:216] guest clock: 1723596642.065739809
	I0814 00:50:42.102732   50563 fix.go:229] Guest: 2024-08-14 00:50:42.065739809 +0000 UTC Remote: 2024-08-14 00:50:41.996135167 +0000 UTC m=+45.286153489 (delta=69.604642ms)
	I0814 00:50:42.102776   50563 fix.go:200] guest clock delta is within tolerance: 69.604642ms
	I0814 00:50:42.102783   50563 start.go:83] releasing machines lock for "kubernetes-upgrade-492920", held for 24.139549385s
	I0814 00:50:42.102812   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:42.103149   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:50:42.106214   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.106670   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:42.106704   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.106848   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:42.107334   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:42.107531   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:50:42.107592   50563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:50:42.107646   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:42.107789   50563 ssh_runner.go:195] Run: cat /version.json
	I0814 00:50:42.107809   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:50:42.110358   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.110724   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:42.110756   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.110808   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.110914   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:42.111101   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:42.111204   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:42.111229   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:42.111457   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:50:42.111478   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:42.111629   50563 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:50:42.111649   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:50:42.111776   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:50:42.111886   50563 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:50:42.196346   50563 ssh_runner.go:195] Run: systemctl --version
	I0814 00:50:42.226395   50563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:50:42.386033   50563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 00:50:42.392924   50563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:50:42.393054   50563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:50:42.411153   50563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 00:50:42.411178   50563 start.go:495] detecting cgroup driver to use...
	I0814 00:50:42.411251   50563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:50:42.429332   50563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:50:42.444519   50563 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:50:42.444579   50563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:50:42.458917   50563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:50:42.473778   50563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:50:42.595333   50563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:50:42.770636   50563 docker.go:233] disabling docker service ...
	I0814 00:50:42.770698   50563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:50:42.784594   50563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:50:42.796897   50563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:50:42.910787   50563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:50:43.039642   50563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:50:43.056103   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:50:43.074749   50563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 00:50:43.074807   50563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:50:43.084373   50563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:50:43.084434   50563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:50:43.093964   50563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:50:43.103238   50563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:50:43.112426   50563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:50:43.121885   50563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:50:43.130590   50563 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 00:50:43.130647   50563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 00:50:43.144877   50563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:50:43.155101   50563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:50:43.286770   50563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:50:43.424509   50563 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:50:43.424619   50563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:50:43.429135   50563 start.go:563] Will wait 60s for crictl version
	I0814 00:50:43.429196   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:43.432851   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:50:43.473830   50563 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:50:43.473903   50563 ssh_runner.go:195] Run: crio --version
	I0814 00:50:43.501017   50563 ssh_runner.go:195] Run: crio --version
	I0814 00:50:43.536731   50563 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 00:50:43.537962   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:50:43.541410   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:43.541836   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:50:32 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:50:43.541860   50563 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:50:43.542094   50563 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 00:50:43.546400   50563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 00:50:43.559956   50563 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:50:43.560090   50563 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 00:50:43.560157   50563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:50:43.594788   50563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 00:50:43.594863   50563 ssh_runner.go:195] Run: which lz4
	I0814 00:50:43.598801   50563 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 00:50:43.602941   50563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 00:50:43.602971   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 00:50:45.036483   50563 crio.go:462] duration metric: took 1.437715415s to copy over tarball
	I0814 00:50:45.036567   50563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 00:50:47.639959   50563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.603359216s)
	I0814 00:50:47.639992   50563 crio.go:469] duration metric: took 2.603475903s to extract the tarball
	I0814 00:50:47.640001   50563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 00:50:47.687806   50563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:50:47.733752   50563 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 00:50:47.733772   50563 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 00:50:47.733834   50563 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:50:47.733846   50563 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:47.733900   50563 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:47.733917   50563 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:47.733884   50563 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:47.734114   50563 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 00:50:47.733870   50563 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:47.734153   50563 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 00:50:47.735622   50563 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:47.735785   50563 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:47.735805   50563 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:47.735853   50563 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 00:50:47.735788   50563 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:47.735880   50563 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 00:50:47.735977   50563 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:50:47.736247   50563 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:47.994440   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 00:50:47.998850   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:48.009828   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 00:50:48.014444   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:48.018810   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:48.042887   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:48.049977   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:48.150079   50563 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 00:50:48.150124   50563 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 00:50:48.150136   50563 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 00:50:48.150157   50563 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:48.150193   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.150198   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.192132   50563 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 00:50:48.192174   50563 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 00:50:48.192217   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.195923   50563 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 00:50:48.195964   50563 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:48.195979   50563 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 00:50:48.196015   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.196018   50563 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:48.196058   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.213597   50563 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 00:50:48.213639   50563 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:48.213675   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.217155   50563 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 00:50:48.217194   50563 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:48.217211   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:50:48.217231   50563 ssh_runner.go:195] Run: which crictl
	I0814 00:50:48.217314   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:50:48.217315   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:48.217361   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:48.217421   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:48.220003   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:48.344308   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:50:48.344362   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:50:48.344310   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:48.344377   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:48.344451   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:48.344556   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:48.349238   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:48.480904   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:48.480929   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:50:48.480979   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:50:48.481146   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:50:48.482948   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:50:48.490609   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:50:48.490657   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:50:48.613554   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 00:50:48.613585   50563 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:50:48.617681   50563 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:50:48.652819   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 00:50:48.652889   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 00:50:48.652953   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 00:50:48.653013   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 00:50:48.653049   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 00:50:48.705877   50563 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 00:50:48.819790   50563 cache_images.go:92] duration metric: took 1.085983207s to LoadCachedImages
	W0814 00:50:48.819899   50563 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0814 00:50:48.819919   50563 kubeadm.go:934] updating node { 192.168.50.136 8443 v1.20.0 crio true true} ...
	I0814 00:50:48.820034   50563 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-492920 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 00:50:48.820100   50563 ssh_runner.go:195] Run: crio config
	I0814 00:50:48.863433   50563 cni.go:84] Creating CNI manager for ""
	I0814 00:50:48.863466   50563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:50:48.863483   50563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:50:48.863506   50563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-492920 NodeName:kubernetes-upgrade-492920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 00:50:48.863661   50563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-492920"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:50:48.863733   50563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 00:50:48.872915   50563 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:50:48.872993   50563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:50:48.881505   50563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0814 00:50:48.896660   50563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:50:48.911704   50563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0814 00:50:48.927129   50563 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0814 00:50:48.930873   50563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 00:50:48.941863   50563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:50:49.050672   50563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:50:49.066754   50563 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920 for IP: 192.168.50.136
	I0814 00:50:49.066776   50563 certs.go:194] generating shared ca certs ...
	I0814 00:50:49.066796   50563 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.066979   50563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:50:49.067095   50563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:50:49.067114   50563 certs.go:256] generating profile certs ...
	I0814 00:50:49.067181   50563 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.key
	I0814 00:50:49.067205   50563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.crt with IP's: []
	I0814 00:50:49.125749   50563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.crt ...
	I0814 00:50:49.125782   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.crt: {Name:mk8f06d659e0f3eb0f8f478162ea058b4f8d7996 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.125981   50563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.key ...
	I0814 00:50:49.125998   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.key: {Name:mke029b06f025c17f42a1eb839a0d7281a9c7ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.126147   50563 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key.d758bd7f
	I0814 00:50:49.126170   50563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt.d758bd7f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.136]
	I0814 00:50:49.208525   50563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt.d758bd7f ...
	I0814 00:50:49.208557   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt.d758bd7f: {Name:mk433ca3c8a553b04cffa0aa38c57ea7ce27c3bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.208711   50563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key.d758bd7f ...
	I0814 00:50:49.208725   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key.d758bd7f: {Name:mkd6cc2f7622e14431c60822831aa4fc9447bfdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.208808   50563 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt.d758bd7f -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt
	I0814 00:50:49.208917   50563 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key.d758bd7f -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key
	I0814 00:50:49.208981   50563 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key
	I0814 00:50:49.208997   50563 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.crt with IP's: []
	I0814 00:50:49.357684   50563 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.crt ...
	I0814 00:50:49.357721   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.crt: {Name:mk71d564012af0004792ac327b1894c3e413f982 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.357891   50563 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key ...
	I0814 00:50:49.357911   50563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key: {Name:mk3410686ed0684c084eef5fa899c50652a832ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:50:49.358158   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:50:49.358218   50563 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:50:49.358236   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:50:49.358284   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:50:49.358324   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:50:49.358356   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:50:49.358421   50563 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:50:49.359030   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:50:49.386184   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:50:49.408412   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:50:49.430896   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:50:49.454064   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 00:50:49.477451   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:50:49.498995   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:50:49.520631   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 00:50:49.543492   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:50:49.565632   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:50:49.588226   50563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:50:49.611146   50563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:50:49.626904   50563 ssh_runner.go:195] Run: openssl version
	I0814 00:50:49.632626   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:50:49.642910   50563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:50:49.647167   50563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:50:49.647226   50563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:50:49.652562   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:50:49.662584   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:50:49.672350   50563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:50:49.676506   50563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:50:49.676577   50563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:50:49.681755   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 00:50:49.691927   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:50:49.701900   50563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:50:49.706025   50563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:50:49.706097   50563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:50:49.711946   50563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:50:49.722347   50563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:50:49.726860   50563 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 00:50:49.726913   50563 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:50:49.727003   50563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:50:49.727048   50563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:50:49.765979   50563 cri.go:89] found id: ""
	I0814 00:50:49.766078   50563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 00:50:49.775803   50563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 00:50:49.785119   50563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 00:50:49.794148   50563 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 00:50:49.794168   50563 kubeadm.go:157] found existing configuration files:
	
	I0814 00:50:49.794211   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 00:50:49.802815   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 00:50:49.802865   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 00:50:49.811578   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 00:50:49.820260   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 00:50:49.820348   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 00:50:49.829080   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 00:50:49.837240   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 00:50:49.837297   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 00:50:49.845771   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 00:50:49.853944   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 00:50:49.854004   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 00:50:49.862230   50563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 00:50:50.134550   50563 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 00:52:48.052718   50563 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 00:52:48.052838   50563 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 00:52:48.054354   50563 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 00:52:48.054447   50563 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 00:52:48.054557   50563 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 00:52:48.054712   50563 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 00:52:48.054880   50563 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 00:52:48.054968   50563 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 00:52:48.056959   50563 out.go:204]   - Generating certificates and keys ...
	I0814 00:52:48.057056   50563 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 00:52:48.057144   50563 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 00:52:48.057233   50563 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 00:52:48.057313   50563 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 00:52:48.057394   50563 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 00:52:48.057463   50563 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 00:52:48.057575   50563 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 00:52:48.057796   50563 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	I0814 00:52:48.057888   50563 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 00:52:48.058110   50563 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	I0814 00:52:48.058218   50563 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 00:52:48.058291   50563 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 00:52:48.058338   50563 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 00:52:48.058427   50563 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 00:52:48.058482   50563 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 00:52:48.058548   50563 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 00:52:48.058666   50563 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 00:52:48.058740   50563 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 00:52:48.058882   50563 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 00:52:48.059006   50563 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 00:52:48.059049   50563 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 00:52:48.059133   50563 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 00:52:48.060735   50563 out.go:204]   - Booting up control plane ...
	I0814 00:52:48.060868   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 00:52:48.060996   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 00:52:48.061100   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 00:52:48.061214   50563 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 00:52:48.061452   50563 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 00:52:48.061508   50563 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 00:52:48.061607   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:52:48.061861   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:52:48.061953   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:52:48.062254   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:52:48.062372   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:52:48.062653   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:52:48.062753   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:52:48.063019   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:52:48.063121   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:52:48.063390   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:52:48.063406   50563 kubeadm.go:310] 
	I0814 00:52:48.063439   50563 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 00:52:48.063474   50563 kubeadm.go:310] 		timed out waiting for the condition
	I0814 00:52:48.063482   50563 kubeadm.go:310] 
	I0814 00:52:48.063511   50563 kubeadm.go:310] 	This error is likely caused by:
	I0814 00:52:48.063544   50563 kubeadm.go:310] 		- The kubelet is not running
	I0814 00:52:48.063656   50563 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 00:52:48.063666   50563 kubeadm.go:310] 
	I0814 00:52:48.063749   50563 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 00:52:48.063779   50563 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 00:52:48.063808   50563 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 00:52:48.063815   50563 kubeadm.go:310] 
	I0814 00:52:48.063923   50563 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 00:52:48.064002   50563 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 00:52:48.064011   50563 kubeadm.go:310] 
	I0814 00:52:48.064091   50563 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 00:52:48.064164   50563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 00:52:48.064244   50563 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 00:52:48.064314   50563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0814 00:52:48.064442   50563 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-492920 localhost] and IPs [192.168.50.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 00:52:48.064497   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 00:52:48.064810   50563 kubeadm.go:310] 
	I0814 00:52:49.222144   50563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.157614982s)
	I0814 00:52:49.222231   50563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:52:49.238232   50563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 00:52:49.249164   50563 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 00:52:49.249194   50563 kubeadm.go:157] found existing configuration files:
	
	I0814 00:52:49.249253   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 00:52:49.258402   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 00:52:49.258473   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 00:52:49.267715   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 00:52:49.276372   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 00:52:49.276444   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 00:52:49.287786   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 00:52:49.296341   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 00:52:49.296410   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 00:52:49.305903   50563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 00:52:49.315433   50563 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 00:52:49.315502   50563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 00:52:49.325162   50563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 00:52:49.417692   50563 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 00:52:49.417748   50563 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 00:52:49.578125   50563 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 00:52:49.578317   50563 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 00:52:49.578467   50563 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 00:52:49.811758   50563 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 00:52:49.813644   50563 out.go:204]   - Generating certificates and keys ...
	I0814 00:52:49.813752   50563 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 00:52:49.813826   50563 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 00:52:49.813924   50563 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 00:52:49.814000   50563 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 00:52:49.814101   50563 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 00:52:49.814180   50563 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 00:52:49.814261   50563 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 00:52:49.814342   50563 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 00:52:49.814439   50563 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 00:52:49.814534   50563 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 00:52:49.814590   50563 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 00:52:49.814666   50563 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 00:52:50.011917   50563 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 00:52:50.174488   50563 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 00:52:50.463539   50563 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 00:52:50.511243   50563 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 00:52:50.528512   50563 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 00:52:50.529864   50563 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 00:52:50.529931   50563 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 00:52:50.696334   50563 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 00:52:50.697955   50563 out.go:204]   - Booting up control plane ...
	I0814 00:52:50.698100   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 00:52:50.708614   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 00:52:50.708721   50563 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 00:52:50.708837   50563 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 00:52:50.713461   50563 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 00:53:30.711805   50563 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 00:53:30.711901   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:53:30.712083   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:53:35.712276   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:53:35.712564   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:53:45.712727   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:53:45.712999   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:54:05.713391   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:54:05.713654   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:54:45.715451   50563 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:54:45.715739   50563 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:54:45.715765   50563 kubeadm.go:310] 
	I0814 00:54:45.715805   50563 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 00:54:45.715853   50563 kubeadm.go:310] 		timed out waiting for the condition
	I0814 00:54:45.715874   50563 kubeadm.go:310] 
	I0814 00:54:45.715926   50563 kubeadm.go:310] 	This error is likely caused by:
	I0814 00:54:45.715973   50563 kubeadm.go:310] 		- The kubelet is not running
	I0814 00:54:45.716098   50563 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 00:54:45.716113   50563 kubeadm.go:310] 
	I0814 00:54:45.716277   50563 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 00:54:45.716331   50563 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 00:54:45.716362   50563 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 00:54:45.716370   50563 kubeadm.go:310] 
	I0814 00:54:45.716521   50563 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 00:54:45.716635   50563 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 00:54:45.716652   50563 kubeadm.go:310] 
	I0814 00:54:45.716809   50563 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 00:54:45.716917   50563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 00:54:45.717008   50563 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 00:54:45.717102   50563 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 00:54:45.717113   50563 kubeadm.go:310] 
	I0814 00:54:45.717841   50563 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 00:54:45.717957   50563 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 00:54:45.718076   50563 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 00:54:45.718174   50563 kubeadm.go:394] duration metric: took 3m55.991262591s to StartCluster
	I0814 00:54:45.718235   50563 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 00:54:45.718314   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 00:54:45.769828   50563 cri.go:89] found id: ""
	I0814 00:54:45.769859   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.769869   50563 logs.go:278] No container was found matching "kube-apiserver"
	I0814 00:54:45.769878   50563 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 00:54:45.769958   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 00:54:45.812380   50563 cri.go:89] found id: ""
	I0814 00:54:45.812413   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.812425   50563 logs.go:278] No container was found matching "etcd"
	I0814 00:54:45.812433   50563 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 00:54:45.812499   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 00:54:45.855394   50563 cri.go:89] found id: ""
	I0814 00:54:45.855421   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.855432   50563 logs.go:278] No container was found matching "coredns"
	I0814 00:54:45.855439   50563 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 00:54:45.855500   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 00:54:45.896915   50563 cri.go:89] found id: ""
	I0814 00:54:45.896946   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.896956   50563 logs.go:278] No container was found matching "kube-scheduler"
	I0814 00:54:45.896964   50563 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 00:54:45.897028   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 00:54:45.933932   50563 cri.go:89] found id: ""
	I0814 00:54:45.933959   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.933971   50563 logs.go:278] No container was found matching "kube-proxy"
	I0814 00:54:45.933978   50563 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 00:54:45.934062   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 00:54:45.968012   50563 cri.go:89] found id: ""
	I0814 00:54:45.968040   50563 logs.go:276] 0 containers: []
	W0814 00:54:45.968051   50563 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 00:54:45.968059   50563 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 00:54:45.968145   50563 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 00:54:46.000017   50563 cri.go:89] found id: ""
	I0814 00:54:46.000046   50563 logs.go:276] 0 containers: []
	W0814 00:54:46.000060   50563 logs.go:278] No container was found matching "kindnet"
	I0814 00:54:46.000072   50563 logs.go:123] Gathering logs for container status ...
	I0814 00:54:46.000087   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 00:54:46.043797   50563 logs.go:123] Gathering logs for kubelet ...
	I0814 00:54:46.043832   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 00:54:46.111322   50563 logs.go:123] Gathering logs for dmesg ...
	I0814 00:54:46.111356   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 00:54:46.123996   50563 logs.go:123] Gathering logs for describe nodes ...
	I0814 00:54:46.124020   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 00:54:46.273715   50563 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 00:54:46.273745   50563 logs.go:123] Gathering logs for CRI-O ...
	I0814 00:54:46.273761   50563 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 00:54:46.424847   50563 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 00:54:46.424914   50563 out.go:239] * 
	* 
	W0814 00:54:46.424991   50563 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 00:54:46.425025   50563 out.go:239] * 
	* 
	W0814 00:54:46.426157   50563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 00:54:46.429674   50563 out.go:177] 
	W0814 00:54:46.430927   50563 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 00:54:46.431001   50563 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 00:54:46.431031   50563 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 00:54:46.432515   50563 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
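The failure above is the stock kubeadm wait-control-plane timeout: the kubelet never answered on localhost:10248, so no control-plane static pod came up, and minikube maps that to K8S_KUBELET_NOT_RUNNING (exit status 109). The suggestion embedded in the same output is to inspect 'journalctl -xeu kubelet' and retry with an explicit kubelet cgroup driver. A minimal sketch of that retry, reusing the flags from the failed command above; whether the systemd cgroup driver actually resolves this particular timeout is not verified by this run:

	# Retry the v1.20.0 start with the cgroup driver suggested in the output above.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd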
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-492920
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-492920: (1.598967439s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-492920 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-492920 status --format={{.Host}}: exit status 7 (73.923248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
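Here the stopped profile makes `status` exit non-zero (exit status 7) while printing "Stopped", and the test treats that as acceptable before attempting the upgrade start. A small illustrative sketch of that check, not taken from the test suite, using the same binary, profile name, and --format template as above:

	# Accept a non-zero `status` exit as long as the host reports "Stopped".
	HOST_STATE=$(out/minikube-linux-amd64 -p kubernetes-upgrade-492920 status --format='{{.Host}}' || true)
	if [ "$HOST_STATE" = "Stopped" ]; then
		echo "host is stopped; proceeding with the v1.31.0 start"
	fi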
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.528301102s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-492920 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (111.280773ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-492920] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-492920
	    minikube start -p kubernetes-upgrade-492920 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4929202 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-492920 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
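Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) means minikube refuses the downgrade without modifying the existing cluster, so the profile should still be serving v1.31.0; the test then follows the third suggested option and simply restarts at v1.31.0, as shown in the next step. A quick way to confirm the cluster was left untouched, reusing the kubectl invocation the test ran earlier (an illustrative check, not part of the test itself):

	# The server version should still report v1.31.0 after the refused downgrade.
	kubectl --context kubernetes-upgrade-492920 version --output=json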
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-492920 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (23.009835681s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-14 00:56:22.903245914 +0000 UTC m=+4173.066743117
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-492920 -n kubernetes-upgrade-492920
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-492920 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-492920 logs -n 25: (1.428783914s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-612440 sudo cat              | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo cat              | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo                  | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo                  | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo                  | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo find             | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-612440 sudo crio             | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-612440                       | cilium-612440             | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:53 UTC |
	| start   | -p cert-expiration-769488              | cert-expiration-769488    | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:54 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-095151              | running-upgrade-095151    | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:53 UTC |
	| start   | -p force-systemd-flag-288470           | force-systemd-flag-288470 | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:54 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-900037            | force-systemd-env-900037  | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:53 UTC |
	| start   | -p cert-options-314451                 | cert-options-314451       | jenkins | v1.33.1 | 14 Aug 24 00:53 UTC | 14 Aug 24 00:55 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-288470 ssh cat      | force-systemd-flag-288470 | jenkins | v1.33.1 | 14 Aug 24 00:54 UTC | 14 Aug 24 00:54 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-288470           | force-systemd-flag-288470 | jenkins | v1.33.1 | 14 Aug 24 00:54 UTC | 14 Aug 24 00:54 UTC |
	| start   | -p pause-074686 --memory=2048          | pause-074686              | jenkins | v1.33.1 | 14 Aug 24 00:54 UTC | 14 Aug 24 00:56 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-492920           | kubernetes-upgrade-492920 | jenkins | v1.33.1 | 14 Aug 24 00:54 UTC | 14 Aug 24 00:54 UTC |
	| start   | -p kubernetes-upgrade-492920           | kubernetes-upgrade-492920 | jenkins | v1.33.1 | 14 Aug 24 00:54 UTC | 14 Aug 24 00:55 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-314451 ssh                | cert-options-314451       | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC | 14 Aug 24 00:55 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-314451 -- sudo         | cert-options-314451       | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC | 14 Aug 24 00:55 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-314451                 | cert-options-314451       | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC | 14 Aug 24 00:55 UTC |
	| start   | -p old-k8s-version-179312              | old-k8s-version-179312    | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-492920           | kubernetes-upgrade-492920 | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-492920           | kubernetes-upgrade-492920 | jenkins | v1.33.1 | 14 Aug 24 00:55 UTC | 14 Aug 24 00:56 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-074686                        | pause-074686              | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 00:56:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 00:56:09.906423   58216 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:56:09.906579   58216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:56:09.906592   58216 out.go:304] Setting ErrFile to fd 2...
	I0814 00:56:09.906598   58216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:56:09.906903   58216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:56:09.907662   58216 out.go:298] Setting JSON to false
	I0814 00:56:09.908904   58216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5916,"bootTime":1723591054,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:56:09.908984   58216 start.go:139] virtualization: kvm guest
	I0814 00:56:09.911286   58216 out.go:177] * [pause-074686] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:56:09.912593   58216 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:56:09.912584   58216 notify.go:220] Checking for updates...
	I0814 00:56:09.915407   58216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:56:09.916691   58216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:56:09.917820   58216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:56:09.918981   58216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:56:09.920187   58216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:56:09.922024   58216 config.go:182] Loaded profile config "pause-074686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:56:09.922763   58216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:56:09.922845   58216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:56:09.939231   58216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0814 00:56:09.939713   58216 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:56:09.940286   58216 main.go:141] libmachine: Using API Version  1
	I0814 00:56:09.940315   58216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:56:09.940750   58216 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:56:09.940951   58216 main.go:141] libmachine: (pause-074686) Calling .DriverName
	I0814 00:56:09.941195   58216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:56:09.941490   58216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:56:09.941529   58216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:56:09.956662   58216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41155
	I0814 00:56:09.957091   58216 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:56:09.957537   58216 main.go:141] libmachine: Using API Version  1
	I0814 00:56:09.957558   58216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:56:09.957863   58216 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:56:09.958063   58216 main.go:141] libmachine: (pause-074686) Calling .DriverName
	I0814 00:56:09.992131   58216 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:56:09.993175   58216 start.go:297] selected driver: kvm2
	I0814 00:56:09.993185   58216 start.go:901] validating driver "kvm2" against &{Name:pause-074686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-074686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:56:09.993346   58216 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:56:09.993711   58216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:56:09.993783   58216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:56:10.009645   58216 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:56:10.010487   58216 cni.go:84] Creating CNI manager for ""
	I0814 00:56:10.010502   58216 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:56:10.010568   58216 start.go:340] cluster config:
	{Name:pause-074686 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-074686 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:56:10.010695   58216 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:56:10.012206   58216 out.go:177] * Starting "pause-074686" primary control-plane node in "pause-074686" cluster
	I0814 00:56:10.013493   58216 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:56:10.013531   58216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:56:10.013538   58216 cache.go:56] Caching tarball of preloaded images
	I0814 00:56:10.013642   58216 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:56:10.013666   58216 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 00:56:10.013810   58216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/pause-074686/config.json ...
	I0814 00:56:10.014070   58216 start.go:360] acquireMachinesLock for pause-074686: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:56:10.806650   58216 start.go:364] duration metric: took 792.540074ms to acquireMachinesLock for "pause-074686"
	I0814 00:56:10.806704   58216 start.go:96] Skipping create...Using existing machine configuration
	I0814 00:56:10.806715   58216 fix.go:54] fixHost starting: 
	I0814 00:56:10.807087   58216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:56:10.807145   58216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:56:10.826408   58216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
	I0814 00:56:10.826888   58216 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:56:10.827369   58216 main.go:141] libmachine: Using API Version  1
	I0814 00:56:10.827394   58216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:56:10.827723   58216 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:56:10.827911   58216 main.go:141] libmachine: (pause-074686) Calling .DriverName
	I0814 00:56:10.828071   58216 main.go:141] libmachine: (pause-074686) Calling .GetState
	I0814 00:56:10.829600   58216 fix.go:112] recreateIfNeeded on pause-074686: state=Running err=<nil>
	W0814 00:56:10.829622   58216 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 00:56:10.832590   58216 out.go:177] * Updating the running kvm2 "pause-074686" VM ...
	I0814 00:56:10.589791   58134 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:56:10.589817   58134 machine.go:97] duration metric: took 10.484884081s to provisionDockerMachine
	I0814 00:56:10.589827   58134 start.go:293] postStartSetup for "kubernetes-upgrade-492920" (driver="kvm2")
	I0814 00:56:10.589838   58134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:56:10.589856   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:56:10.590192   58134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:56:10.590222   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:56:10.592869   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.593338   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:10.593380   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.593522   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:56:10.593729   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:56:10.593915   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:56:10.594068   58134 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:56:10.672026   58134 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:56:10.675835   58134 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:56:10.675864   58134 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:56:10.675930   58134 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:56:10.676010   58134 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:56:10.676097   58134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:56:10.685343   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:56:10.706940   58134 start.go:296] duration metric: took 117.102435ms for postStartSetup
	I0814 00:56:10.706969   58134 fix.go:56] duration metric: took 10.627698697s for fixHost
	I0814 00:56:10.706990   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:56:10.709553   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.709932   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:10.709971   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.710168   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:56:10.710373   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:56:10.710521   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:56:10.710667   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:56:10.710824   58134 main.go:141] libmachine: Using SSH client type: native
	I0814 00:56:10.710984   58134 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.136 22 <nil> <nil>}
	I0814 00:56:10.710994   58134 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 00:56:10.806501   58134 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723596970.797743286
	
	I0814 00:56:10.806521   58134 fix.go:216] guest clock: 1723596970.797743286
	I0814 00:56:10.806529   58134 fix.go:229] Guest: 2024-08-14 00:56:10.797743286 +0000 UTC Remote: 2024-08-14 00:56:10.706972805 +0000 UTC m=+10.809253508 (delta=90.770481ms)
	I0814 00:56:10.806552   58134 fix.go:200] guest clock delta is within tolerance: 90.770481ms
	I0814 00:56:10.806558   58134 start.go:83] releasing machines lock for "kubernetes-upgrade-492920", held for 10.727299839s
	I0814 00:56:10.806586   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:56:10.806866   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:56:10.809481   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.809868   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:10.809897   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.810087   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:56:10.810552   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:56:10.810719   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .DriverName
	I0814 00:56:10.810830   58134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:56:10.810869   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:56:10.810913   58134 ssh_runner.go:195] Run: cat /version.json
	I0814 00:56:10.810939   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHHostname
	I0814 00:56:10.813577   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.813817   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.813981   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:10.814011   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.814124   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:56:10.814239   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:10.814262   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:56:10.814279   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:10.814405   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHPort
	I0814 00:56:10.814472   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:56:10.814559   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHKeyPath
	I0814 00:56:10.814678   58134 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:56:10.814731   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetSSHUsername
	I0814 00:56:10.814851   58134 sshutil.go:53] new ssh client: &{IP:192.168.50.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kubernetes-upgrade-492920/id_rsa Username:docker}
	I0814 00:56:10.891979   58134 ssh_runner.go:195] Run: systemctl --version
	I0814 00:56:10.925839   58134 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:56:11.081692   58134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 00:56:11.088968   58134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:56:11.089033   58134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:56:11.098909   58134 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 00:56:11.098932   58134 start.go:495] detecting cgroup driver to use...
	I0814 00:56:11.098987   58134 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:56:11.116529   58134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:56:11.130323   58134 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:56:11.130397   58134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:56:11.143616   58134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:56:11.156126   58134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:56:11.287733   58134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:56:11.432523   58134 docker.go:233] disabling docker service ...
	I0814 00:56:11.432592   58134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:56:11.452589   58134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:56:11.466630   58134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:56:11.626400   58134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:56:11.779490   58134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:56:11.793584   58134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:56:11.811170   58134 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 00:56:11.811237   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.821354   58134 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:56:11.821405   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.831771   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.842317   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.852192   58134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:56:11.862723   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.872899   58134 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.883112   58134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:56:11.892908   58134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:56:11.901767   58134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:56:11.910631   58134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:56:12.068429   58134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:56:12.346280   58134 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:56:12.346359   58134 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:56:12.350734   58134 start.go:563] Will wait 60s for crictl version
	I0814 00:56:12.350794   58134 ssh_runner.go:195] Run: which crictl
	I0814 00:56:12.354413   58134 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:56:12.388377   58134 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:56:12.388466   58134 ssh_runner.go:195] Run: crio --version
	I0814 00:56:12.414579   58134 ssh_runner.go:195] Run: crio --version
	I0814 00:56:12.444281   58134 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
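
The block from 00:56:11.79 through 00:56:12.07 above is the CRI-O runtime preparation that minikube drives over SSH, one command per Run: line. A minimal sketch consolidating those same commands into one script for manual reproduction on the node, assuming the same config path /etc/crio/crio.conf.d/02-crio.conf (this is an editorial sketch, not part of the test tooling):

    #!/usr/bin/env bash
    # Consolidated sketch of the CRI-O setup steps logged above; run on the node.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image and cgroup driver, as set by the sed commands in the log.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Allow unprivileged binds to low ports inside pods.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

    # Enable IPv4 forwarding and restart the runtime.
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio
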
	I0814 00:56:10.833639   58216 machine.go:94] provisionDockerMachine start ...
	I0814 00:56:10.833661   58216 main.go:141] libmachine: (pause-074686) Calling .DriverName
	I0814 00:56:10.833868   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:10.836508   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:10.836984   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:10.837012   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:10.837121   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHPort
	I0814 00:56:10.837280   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:10.837424   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:10.837548   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHUsername
	I0814 00:56:10.837715   58216 main.go:141] libmachine: Using SSH client type: native
	I0814 00:56:10.837936   58216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.6 22 <nil> <nil>}
	I0814 00:56:10.837951   58216 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 00:56:10.954592   58216 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-074686
	
	I0814 00:56:10.954629   58216 main.go:141] libmachine: (pause-074686) Calling .GetMachineName
	I0814 00:56:10.954908   58216 buildroot.go:166] provisioning hostname "pause-074686"
	I0814 00:56:10.954937   58216 main.go:141] libmachine: (pause-074686) Calling .GetMachineName
	I0814 00:56:10.955135   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:10.957675   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:10.957998   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:10.958025   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:10.958209   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHPort
	I0814 00:56:10.958391   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:10.958547   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:10.958728   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHUsername
	I0814 00:56:10.958878   58216 main.go:141] libmachine: Using SSH client type: native
	I0814 00:56:10.959042   58216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.6 22 <nil> <nil>}
	I0814 00:56:10.959060   58216 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-074686 && echo "pause-074686" | sudo tee /etc/hostname
	I0814 00:56:11.086416   58216 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-074686
	
	I0814 00:56:11.086446   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:11.089593   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.089999   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:11.090035   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.090246   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHPort
	I0814 00:56:11.090425   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:11.090633   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:11.090810   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHUsername
	I0814 00:56:11.091032   58216 main.go:141] libmachine: Using SSH client type: native
	I0814 00:56:11.091265   58216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.6 22 <nil> <nil>}
	I0814 00:56:11.091285   58216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-074686' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-074686/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-074686' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:56:11.206458   58216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:56:11.206489   58216 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:56:11.206540   58216 buildroot.go:174] setting up certificates
	I0814 00:56:11.206551   58216 provision.go:84] configureAuth start
	I0814 00:56:11.206563   58216 main.go:141] libmachine: (pause-074686) Calling .GetMachineName
	I0814 00:56:11.206846   58216 main.go:141] libmachine: (pause-074686) Calling .GetIP
	I0814 00:56:11.209508   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.209834   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:11.209862   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.209991   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:11.212416   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.212801   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:11.212828   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.212955   58216 provision.go:143] copyHostCerts
	I0814 00:56:11.213008   58216 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:56:11.213017   58216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:56:11.213071   58216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:56:11.213167   58216 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:56:11.213175   58216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:56:11.213194   58216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:56:11.213252   58216 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:56:11.213258   58216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:56:11.213275   58216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:56:11.213337   58216 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.pause-074686 san=[127.0.0.1 192.168.83.6 localhost minikube pause-074686]
	I0814 00:56:11.280104   58216 provision.go:177] copyRemoteCerts
	I0814 00:56:11.280176   58216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:56:11.280205   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:11.283324   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.283717   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:11.283745   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.284004   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHPort
	I0814 00:56:11.284190   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:11.284331   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHUsername
	I0814 00:56:11.284475   58216 sshutil.go:53] new ssh client: &{IP:192.168.83.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/pause-074686/id_rsa Username:docker}
	I0814 00:56:11.372345   58216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:56:11.398634   58216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 00:56:11.424035   58216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 00:56:11.448306   58216 provision.go:87] duration metric: took 241.74338ms to configureAuth
	I0814 00:56:11.448333   58216 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:56:11.448605   58216 config.go:182] Loaded profile config "pause-074686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:56:11.448713   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHHostname
	I0814 00:56:11.451290   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.451674   58216 main.go:141] libmachine: (pause-074686) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:4b:38", ip: ""} in network mk-pause-074686: {Iface:virbr3 ExpiryTime:2024-08-14 01:54:59 +0000 UTC Type:0 Mac:52:54:00:79:4b:38 Iaid: IPaddr:192.168.83.6 Prefix:24 Hostname:pause-074686 Clientid:01:52:54:00:79:4b:38}
	I0814 00:56:11.451704   58216 main.go:141] libmachine: (pause-074686) DBG | domain pause-074686 has defined IP address 192.168.83.6 and MAC address 52:54:00:79:4b:38 in network mk-pause-074686
	I0814 00:56:11.451890   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHPort
	I0814 00:56:11.452091   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:11.452301   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHKeyPath
	I0814 00:56:11.452477   58216 main.go:141] libmachine: (pause-074686) Calling .GetSSHUsername
	I0814 00:56:11.452647   58216 main.go:141] libmachine: Using SSH client type: native
	I0814 00:56:11.452860   58216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.83.6 22 <nil> <nil>}
	I0814 00:56:11.452879   58216 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:56:12.445517   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) Calling .GetIP
	I0814 00:56:12.448234   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:12.448591   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:60:27", ip: ""} in network mk-kubernetes-upgrade-492920: {Iface:virbr2 ExpiryTime:2024-08-14 01:55:20 +0000 UTC Type:0 Mac:52:54:00:39:60:27 Iaid: IPaddr:192.168.50.136 Prefix:24 Hostname:kubernetes-upgrade-492920 Clientid:01:52:54:00:39:60:27}
	I0814 00:56:12.448619   58134 main.go:141] libmachine: (kubernetes-upgrade-492920) DBG | domain kubernetes-upgrade-492920 has defined IP address 192.168.50.136 and MAC address 52:54:00:39:60:27 in network mk-kubernetes-upgrade-492920
	I0814 00:56:12.448794   58134 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 00:56:12.452784   58134 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:56:12.452914   58134 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 00:56:12.452972   58134 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:56:12.496332   58134 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:56:12.496355   58134 crio.go:433] Images already preloaded, skipping extraction
	I0814 00:56:12.496409   58134 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:56:12.526756   58134 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 00:56:12.526786   58134 cache_images.go:84] Images are preloaded, skipping loading
	I0814 00:56:12.526794   58134 kubeadm.go:934] updating node { 192.168.50.136 8443 v1.31.0 crio true true} ...
	I0814 00:56:12.526910   58134 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-492920 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 00:56:12.526990   58134 ssh_runner.go:195] Run: crio config
	I0814 00:56:12.571846   58134 cni.go:84] Creating CNI manager for ""
	I0814 00:56:12.571866   58134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:56:12.571878   58134 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:56:12.571897   58134 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.136 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-492920 NodeName:kubernetes-upgrade-492920 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 00:56:12.572026   58134 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-492920"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:56:12.572079   58134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 00:56:12.581468   58134 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:56:12.581535   58134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:56:12.590937   58134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0814 00:56:12.608301   58134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:56:12.624898   58134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
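
The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (the 2169-byte scp in the last line). A small sketch for pulling it back and spot-checking the values minikube derived, assuming the profile name from this run and that the .new file is still present (minikube may later promote it to kubeadm.yaml):

    # Dump the generated config from the node.
    minikube -p kubernetes-upgrade-492920 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"

    # Spot-check the fields this run depends on: API server address, CRI socket,
    # cgroup driver, and the pod/service CIDRs.
    minikube -p kubernetes-upgrade-492920 ssh \
      "sudo grep -E 'advertiseAddress|criSocket|cgroupDriver|podSubnet|serviceSubnet' /var/tmp/minikube/kubeadm.yaml.new"
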
	I0814 00:56:12.641622   58134 ssh_runner.go:195] Run: grep 192.168.50.136	control-plane.minikube.internal$ /etc/hosts
	I0814 00:56:12.645446   58134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:56:12.789627   58134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:56:12.804588   58134 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920 for IP: 192.168.50.136
	I0814 00:56:12.804605   58134 certs.go:194] generating shared ca certs ...
	I0814 00:56:12.804619   58134 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:12.804749   58134 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:56:12.804786   58134 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:56:12.804795   58134 certs.go:256] generating profile certs ...
	I0814 00:56:12.804863   58134 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/client.key
	I0814 00:56:12.804904   58134 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key.d758bd7f
	I0814 00:56:12.804936   58134 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key
	I0814 00:56:12.805037   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:56:12.805067   58134 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:56:12.805077   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:56:12.805101   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:56:12.805122   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:56:12.805142   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:56:12.805183   58134 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:56:12.805766   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:56:12.830439   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:56:12.853755   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:56:12.875508   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:56:12.896771   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 00:56:12.919487   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:56:12.944128   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:56:12.965291   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kubernetes-upgrade-492920/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 00:56:12.987396   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:56:13.008671   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:56:13.031207   58134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:56:13.052639   58134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:56:13.068329   58134 ssh_runner.go:195] Run: openssl version
	I0814 00:56:13.073763   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:56:13.084198   58134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:13.088289   58134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:13.088360   58134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:13.094378   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 00:56:13.103835   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:56:13.114281   58134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:56:13.118323   58134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:56:13.118377   58134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:56:13.123653   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:56:13.132870   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:56:13.142867   58134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:56:13.146743   58134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:56:13.146782   58134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:56:13.152105   58134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
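
The openssl -hash / ln -fs pairs above install each CA bundle under the subject-hash name OpenSSL expects in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0). The hash values can be reproduced with a short loop, assuming the same files under /usr/share/ca-certificates:

    # Recompute the subject-name hash that names each /etc/ssl/certs/<hash>.0 symlink.
    for pem in minikubeCA 16589 165892; do
      hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/${pem}.pem")
      echo "${pem}.pem -> /etc/ssl/certs/${hash}.0"
    done
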
	I0814 00:56:13.161759   58134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:56:13.165895   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 00:56:13.171744   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 00:56:13.177239   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 00:56:13.182320   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 00:56:13.187601   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 00:56:13.192808   58134 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
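
The six openssl -checkend 86400 probes above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted (-checkend exits non-zero if the cert expires within the given number of seconds). The same check, looped over the files from the log, assuming the standard /var/lib/minikube/certs layout:

    # Report any control-plane cert that expires within 24h.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
        echo "ok:       ${crt}.crt valid for at least 24h"
      else
        echo "expiring: ${crt}.crt expires within 24h (or could not be read)"
      fi
    done
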
	I0814 00:56:13.197836   58134 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-492920 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-492920 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.136 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:56:13.197918   58134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:56:13.197977   58134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:56:13.233914   58134 cri.go:89] found id: "bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678"
	I0814 00:56:13.233935   58134 cri.go:89] found id: "ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a"
	I0814 00:56:13.233939   58134 cri.go:89] found id: "755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc"
	I0814 00:56:13.233941   58134 cri.go:89] found id: "b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739"
	I0814 00:56:13.233961   58134 cri.go:89] found id: "b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9"
	I0814 00:56:13.233966   58134 cri.go:89] found id: "c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc"
	I0814 00:56:13.233970   58134 cri.go:89] found id: "0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6"
	I0814 00:56:13.233975   58134 cri.go:89] found id: "521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c"
	I0814 00:56:13.233979   58134 cri.go:89] found id: ""
	I0814 00:56:13.234025   58134 ssh_runner.go:195] Run: sudo runc list -f json
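
Before StartCluster proceeds, the log enumerates the existing kube-system containers through CRI (the eight IDs above) and then inspects the runtime state with runc. The same scan can be repeated by hand on the node; non-empty output means a previous control plane's containers are still present, as expected for this upgrade test:

    # List all kube-system containers (running or exited) known to CRI-O.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Inspect the low-level runc state for the same containers.
    sudo runc list -f json
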
	
	
	==> CRI-O <==
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.560112412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723596983560089883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e922b9f4-13d9-4035-8354-340021c2f257 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.560745214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e78f90bd-3c9b-441f-9f69-7d82e16e529e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.560798705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e78f90bd-3c9b-441f-9f69-7d82e16e529e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.561227755Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:669df276bd23e487040c42b161a5e331a39201325ff69612c18b8807afadad3a,PodSandboxId:f38bbcc18f3cb64789ac881c0404321890511561a5fdd51174fe445e184047c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982274634694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbaee43ef60b0d20f01511a8b5563ace3a277715d759e7344b6dcb79bb0b9663,PodSandboxId:85ecd35ec876202b78f2004357f39afe5835efa7e5a53560ea5cc08a0997abe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982323112211,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b600da323159d69d60d182bfb6b4bc09cbff9849fd29eac54465668ba3c0706,PodSandboxId:25c2260e16c9f7916302d7be323a53273771e8090a897635697ab28fc89b42f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723596981102835323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa66d26e41c6f9675eb7e860408c21d315ef70334d8bbc45dbd535aabefbcb8b,PodSandboxId:b27eafc9aca2eaa9b538895c4b3cb4d271c4db686d3e4576c8fb202733917261,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
596981125621235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1317230fc0fd9ddd1d02001360e322a965474d6b8746d3fc1d582abff0e12092,PodSandboxId:f95649b237e1028722faf729ecdb19056180f3e302bda5f45c9802f0e7da12bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359697583
0870930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c46341c392809adc970c5b68e7984d1c63e6ca5f486375f736dde0d2b1a8705,PodSandboxId:60f7f03c22c977eadb7ff3211b1885c7749b9a47ad6e848c215311e6f65df80d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1723596975790091745,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce87480dd8b16d58207ec3ca2df878b51b7cefbddb683998fb946b4d8ffa3e2e,PodSandboxId:7d3015e5af8284350877f00e51f4c7be2cde3f9b309bb7d9e7d4c272d19566d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723
596975738927136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8363ea0c886a5440f3cb21dacb36a668a8d9fefa9fded1aaa7e6f9f37cd58118,PodSandboxId:23275c8804516244e38c22852d262bf042596a2f650acbe1100da045796c8007,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723596975724081258
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678,PodSandboxId:20f48771fce8c37f685ae5e156ebf51abe335ecaa207c57db5b84d20548f7158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723596960068296558,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a,PodSandboxId:adaefb29ae904ce645079c61054fc6b96bd9511fba839a2a84bf524c175dd7be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723596959773088200,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc,PodSandboxId:843b081171af3559e16c8e0895f3d821bc0087f45411d9a76cf3a567960fb737,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959519295824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739,PodSandboxId:4aad8c0d35f37353bf30536f2cec6cde82a7c637225e95bb3c2db2526657685b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959062595486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9,PodSandboxId:f1ecddcb960794eb28ad0a8181b9fbd5618f8906052096531c641acfeef8d85c,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723596942469428265,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc,PodSandboxId:785e2b05f2cb602e2f1368039d5a4d41d5871f02b474b178505177c66ee48fe3,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723596942463001367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c,PodSandboxId:62e98fa442b0283fbd30c95911cf8c3a9720ee39c54662613e0f6a6edd65c7e4,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723596942440688906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6,PodSandboxId:a383d0900f211862be82872c6b6e91f61a6bd516ae9dfbffeeb0d46550cee17b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723596942459227798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e78f90bd-3c9b-441f-9f69-7d82e16e529e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.603666381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=769cc014-89eb-40e3-94fd-5a82d72f1a5e name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.603743674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=769cc014-89eb-40e3-94fd-5a82d72f1a5e name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.604716800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1797737-2903-4833-a144-9c6a4e9c3a12 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.605102017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723596983605073971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1797737-2903-4833-a144-9c6a4e9c3a12 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.605696531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a87df7df-45ff-4246-ab07-c487dce857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.605757971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a87df7df-45ff-4246-ab07-c487dce857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.606211438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:669df276bd23e487040c42b161a5e331a39201325ff69612c18b8807afadad3a,PodSandboxId:f38bbcc18f3cb64789ac881c0404321890511561a5fdd51174fe445e184047c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982274634694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbaee43ef60b0d20f01511a8b5563ace3a277715d759e7344b6dcb79bb0b9663,PodSandboxId:85ecd35ec876202b78f2004357f39afe5835efa7e5a53560ea5cc08a0997abe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982323112211,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b600da323159d69d60d182bfb6b4bc09cbff9849fd29eac54465668ba3c0706,PodSandboxId:25c2260e16c9f7916302d7be323a53273771e8090a897635697ab28fc89b42f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723596981102835323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa66d26e41c6f9675eb7e860408c21d315ef70334d8bbc45dbd535aabefbcb8b,PodSandboxId:b27eafc9aca2eaa9b538895c4b3cb4d271c4db686d3e4576c8fb202733917261,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
596981125621235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1317230fc0fd9ddd1d02001360e322a965474d6b8746d3fc1d582abff0e12092,PodSandboxId:f95649b237e1028722faf729ecdb19056180f3e302bda5f45c9802f0e7da12bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359697583
0870930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c46341c392809adc970c5b68e7984d1c63e6ca5f486375f736dde0d2b1a8705,PodSandboxId:60f7f03c22c977eadb7ff3211b1885c7749b9a47ad6e848c215311e6f65df80d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1723596975790091745,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce87480dd8b16d58207ec3ca2df878b51b7cefbddb683998fb946b4d8ffa3e2e,PodSandboxId:7d3015e5af8284350877f00e51f4c7be2cde3f9b309bb7d9e7d4c272d19566d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723
596975738927136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8363ea0c886a5440f3cb21dacb36a668a8d9fefa9fded1aaa7e6f9f37cd58118,PodSandboxId:23275c8804516244e38c22852d262bf042596a2f650acbe1100da045796c8007,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723596975724081258
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678,PodSandboxId:20f48771fce8c37f685ae5e156ebf51abe335ecaa207c57db5b84d20548f7158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723596960068296558,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a,PodSandboxId:adaefb29ae904ce645079c61054fc6b96bd9511fba839a2a84bf524c175dd7be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723596959773088200,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc,PodSandboxId:843b081171af3559e16c8e0895f3d821bc0087f45411d9a76cf3a567960fb737,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959519295824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739,PodSandboxId:4aad8c0d35f37353bf30536f2cec6cde82a7c637225e95bb3c2db2526657685b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959062595486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9,PodSandboxId:f1ecddcb960794eb28ad0a8181b9fbd5618f8906052096531c641acfeef8d85c,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723596942469428265,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc,PodSandboxId:785e2b05f2cb602e2f1368039d5a4d41d5871f02b474b178505177c66ee48fe3,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723596942463001367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c,PodSandboxId:62e98fa442b0283fbd30c95911cf8c3a9720ee39c54662613e0f6a6edd65c7e4,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723596942440688906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6,PodSandboxId:a383d0900f211862be82872c6b6e91f61a6bd516ae9dfbffeeb0d46550cee17b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723596942459227798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a87df7df-45ff-4246-ab07-c487dce857a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.647404841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=301a98ee-0a2d-4822-9858-e48d4c3d1780 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.647480048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=301a98ee-0a2d-4822-9858-e48d4c3d1780 name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.649238807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e8c9e37-cf03-4679-8280-a6d1139f5721 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.650067738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723596983650042716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e8c9e37-cf03-4679-8280-a6d1139f5721 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.650721117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c932cd9-04b1-40d0-9c5c-36892ffdfe46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.650787936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c932cd9-04b1-40d0-9c5c-36892ffdfe46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.651396436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:669df276bd23e487040c42b161a5e331a39201325ff69612c18b8807afadad3a,PodSandboxId:f38bbcc18f3cb64789ac881c0404321890511561a5fdd51174fe445e184047c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982274634694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbaee43ef60b0d20f01511a8b5563ace3a277715d759e7344b6dcb79bb0b9663,PodSandboxId:85ecd35ec876202b78f2004357f39afe5835efa7e5a53560ea5cc08a0997abe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982323112211,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b600da323159d69d60d182bfb6b4bc09cbff9849fd29eac54465668ba3c0706,PodSandboxId:25c2260e16c9f7916302d7be323a53273771e8090a897635697ab28fc89b42f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723596981102835323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa66d26e41c6f9675eb7e860408c21d315ef70334d8bbc45dbd535aabefbcb8b,PodSandboxId:b27eafc9aca2eaa9b538895c4b3cb4d271c4db686d3e4576c8fb202733917261,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
596981125621235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1317230fc0fd9ddd1d02001360e322a965474d6b8746d3fc1d582abff0e12092,PodSandboxId:f95649b237e1028722faf729ecdb19056180f3e302bda5f45c9802f0e7da12bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359697583
0870930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c46341c392809adc970c5b68e7984d1c63e6ca5f486375f736dde0d2b1a8705,PodSandboxId:60f7f03c22c977eadb7ff3211b1885c7749b9a47ad6e848c215311e6f65df80d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1723596975790091745,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce87480dd8b16d58207ec3ca2df878b51b7cefbddb683998fb946b4d8ffa3e2e,PodSandboxId:7d3015e5af8284350877f00e51f4c7be2cde3f9b309bb7d9e7d4c272d19566d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723
596975738927136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8363ea0c886a5440f3cb21dacb36a668a8d9fefa9fded1aaa7e6f9f37cd58118,PodSandboxId:23275c8804516244e38c22852d262bf042596a2f650acbe1100da045796c8007,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723596975724081258
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678,PodSandboxId:20f48771fce8c37f685ae5e156ebf51abe335ecaa207c57db5b84d20548f7158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723596960068296558,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a,PodSandboxId:adaefb29ae904ce645079c61054fc6b96bd9511fba839a2a84bf524c175dd7be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723596959773088200,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc,PodSandboxId:843b081171af3559e16c8e0895f3d821bc0087f45411d9a76cf3a567960fb737,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959519295824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739,PodSandboxId:4aad8c0d35f37353bf30536f2cec6cde82a7c637225e95bb3c2db2526657685b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959062595486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9,PodSandboxId:f1ecddcb960794eb28ad0a8181b9fbd5618f8906052096531c641acfeef8d85c,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723596942469428265,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc,PodSandboxId:785e2b05f2cb602e2f1368039d5a4d41d5871f02b474b178505177c66ee48fe3,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723596942463001367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c,PodSandboxId:62e98fa442b0283fbd30c95911cf8c3a9720ee39c54662613e0f6a6edd65c7e4,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723596942440688906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6,PodSandboxId:a383d0900f211862be82872c6b6e91f61a6bd516ae9dfbffeeb0d46550cee17b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723596942459227798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c932cd9-04b1-40d0-9c5c-36892ffdfe46 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.685534950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e743535-1e2f-44e7-b8bc-b87da92258cb name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.685604558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e743535-1e2f-44e7-b8bc-b87da92258cb name=/runtime.v1.RuntimeService/Version
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.686687819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b36c3e63-d6e5-407d-81b4-e7d8a3adf325 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.687045633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723596983687026333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b36c3e63-d6e5-407d-81b4-e7d8a3adf325 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.687568693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a7b52de-e121-4cb4-86b0-1efa4d88a9a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.687620287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a7b52de-e121-4cb4-86b0-1efa4d88a9a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 00:56:23 kubernetes-upgrade-492920 crio[2271]: time="2024-08-14 00:56:23.688525779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:669df276bd23e487040c42b161a5e331a39201325ff69612c18b8807afadad3a,PodSandboxId:f38bbcc18f3cb64789ac881c0404321890511561a5fdd51174fe445e184047c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982274634694,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbaee43ef60b0d20f01511a8b5563ace3a277715d759e7344b6dcb79bb0b9663,PodSandboxId:85ecd35ec876202b78f2004357f39afe5835efa7e5a53560ea5cc08a0997abe4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723596982323112211,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b600da323159d69d60d182bfb6b4bc09cbff9849fd29eac54465668ba3c0706,PodSandboxId:25c2260e16c9f7916302d7be323a53273771e8090a897635697ab28fc89b42f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723596981102835323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa66d26e41c6f9675eb7e860408c21d315ef70334d8bbc45dbd535aabefbcb8b,PodSandboxId:b27eafc9aca2eaa9b538895c4b3cb4d271c4db686d3e4576c8fb202733917261,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
596981125621235,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1317230fc0fd9ddd1d02001360e322a965474d6b8746d3fc1d582abff0e12092,PodSandboxId:f95649b237e1028722faf729ecdb19056180f3e302bda5f45c9802f0e7da12bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359697583
0870930,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c46341c392809adc970c5b68e7984d1c63e6ca5f486375f736dde0d2b1a8705,PodSandboxId:60f7f03c22c977eadb7ff3211b1885c7749b9a47ad6e848c215311e6f65df80d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1723596975790091745,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce87480dd8b16d58207ec3ca2df878b51b7cefbddb683998fb946b4d8ffa3e2e,PodSandboxId:7d3015e5af8284350877f00e51f4c7be2cde3f9b309bb7d9e7d4c272d19566d6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723
596975738927136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8363ea0c886a5440f3cb21dacb36a668a8d9fefa9fded1aaa7e6f9f37cd58118,PodSandboxId:23275c8804516244e38c22852d262bf042596a2f650acbe1100da045796c8007,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723596975724081258
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678,PodSandboxId:20f48771fce8c37f685ae5e156ebf51abe335ecaa207c57db5b84d20548f7158,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723596960068296558,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f9db41-4857-4e76-acc8-09e3fdb2c279,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a,PodSandboxId:adaefb29ae904ce645079c61054fc6b96bd9511fba839a2a84bf524c175dd7be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723596959773088200,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rwr5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2f71a8d-d8b6-4cf9-882c-05d89fd4d353,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc,PodSandboxId:843b081171af3559e16c8e0895f3d821bc0087f45411d9a76cf3a567960fb737,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959519295824,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod
.name: coredns-6f6b679f8f-5474l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87ae798-5cfa-47da-8b61-172d11d0c2ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739,PodSandboxId:4aad8c0d35f37353bf30536f2cec6cde82a7c637225e95bb3c2db2526657685b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723596959062595486,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-btgqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7261a4-3b02-40b3-b203-c839b9bed864,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9,PodSandboxId:f1ecddcb960794eb28ad0a8181b9fbd5618f8906052096531c641acfeef8d85c,Metadata:&ContainerMetadata{Na
me:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723596942469428265,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b182ab397a2e6f954c863cc9a283c18f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc,PodSandboxId:785e2b05f2cb602e2f1368039d5a4d41d5871f02b474b178505177c66ee48fe3,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723596942463001367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb420282a245ce817725930be9426c87,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c,PodSandboxId:62e98fa442b0283fbd30c95911cf8c3a9720ee39c54662613e0f6a6edd65c7e4,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723596942440688906,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17999a6d4585726354b50905a548da8d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6,PodSandboxId:a383d0900f211862be82872c6b6e91f61a6bd516ae9dfbffeeb0d46550cee17b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723596942459227798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-492920,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57886eb92b3253a4d0af89159f23d110,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a7b52de-e121-4cb4-86b0-1efa4d88a9a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fbaee43ef60b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   1 second ago        Running             coredns                   1                   85ecd35ec8762       coredns-6f6b679f8f-5474l
	669df276bd23e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   1 second ago        Running             coredns                   1                   f38bbcc18f3cb       coredns-6f6b679f8f-btgqk
	aa66d26e41c6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       1                   b27eafc9aca2e       storage-provisioner
	9b600da323159       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   2 seconds ago       Running             kube-proxy                1                   25c2260e16c9f       kube-proxy-rwr5b
	1317230fc0fd9       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   1                   f95649b237e10       kube-controller-manager-kubernetes-upgrade-492920
	7c46341c39280       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            1                   60f7f03c22c97       kube-scheduler-kubernetes-upgrade-492920
	ce87480dd8b16       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   8 seconds ago       Running             kube-apiserver            1                   7d3015e5af828       kube-apiserver-kubernetes-upgrade-492920
	8363ea0c886a5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      1                   23275c8804516       etcd-kubernetes-upgrade-492920
	bdb320848e91f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Exited              storage-provisioner       0                   20f48771fce8c       storage-provisioner
	ec6ff27c4d412       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   23 seconds ago      Exited              kube-proxy                0                   adaefb29ae904       kube-proxy-rwr5b
	755146c9f89fa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   0                   843b081171af3       coredns-6f6b679f8f-5474l
	b4bfa9024df5d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   0                   4aad8c0d35f37       coredns-6f6b679f8f-btgqk
	b155a186b4af2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   41 seconds ago      Exited              kube-apiserver            0                   f1ecddcb96079       kube-apiserver-kubernetes-upgrade-492920
	c21a0d0ea6df1       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   41 seconds ago      Exited              kube-controller-manager   0                   785e2b05f2cb6       kube-controller-manager-kubernetes-upgrade-492920
	0a0a85675b962       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   41 seconds ago      Exited              kube-scheduler            0                   a383d0900f211       kube-scheduler-kubernetes-upgrade-492920
	521e3111e56cb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   41 seconds ago      Exited              etcd                      0                   62e98fa442b02       etcd-kubernetes-upgrade-492920
	
	
	==> coredns [669df276bd23e487040c42b161a5e331a39201325ff69612c18b8807afadad3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [755146c9f89fa8e6df8c958ec44bd66f4c8e32748526080bff8821fbb64e30cc] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b4bfa9024df5d2ba0cda78ca025b3fd28e872bf728e7a9481268301942c91739] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbaee43ef60b0d20f01511a8b5563ace3a277715d759e7344b6dcb79bb0b9663] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-492920
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-492920
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:55:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-492920
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 00:56:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 00:56:19 +0000   Wed, 14 Aug 2024 00:55:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 00:56:19 +0000   Wed, 14 Aug 2024 00:55:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 00:56:19 +0000   Wed, 14 Aug 2024 00:55:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 00:56:19 +0000   Wed, 14 Aug 2024 00:55:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.136
	  Hostname:    kubernetes-upgrade-492920
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1373c3b0ca3e45a4a945e4b43627df26
	  System UUID:                1373c3b0-ca3e-45a4-a945-e4b43627df26
	  Boot ID:                    8ad1a666-77c0-46e6-ae1d-2203bc26a181
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5474l                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26s
	  kube-system                 coredns-6f6b679f8f-btgqk                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26s
	  kube-system                 etcd-kubernetes-upgrade-492920                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28s
	  kube-system                 kube-apiserver-kubernetes-upgrade-492920             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-492920    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-rwr5b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-kubernetes-upgrade-492920             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s (x8 over 46s)  kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     43s (x7 over 46s)  kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    43s (x8 over 46s)  kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           33s                node-controller  Node kubernetes-upgrade-492920 event: Registered Node kubernetes-upgrade-492920 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-492920 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-492920 event: Registered Node kubernetes-upgrade-492920 in Controller
	
	
	==> dmesg <==
	[  +1.830776] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.444887] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.165181] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.059108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.045025] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.172818] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.182274] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.280907] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +4.012011] systemd-fstab-generator[728]: Ignoring "noauto" option for root device
	[  +1.791056] systemd-fstab-generator[850]: Ignoring "noauto" option for root device
	[  +0.067443] kauditd_printk_skb: 158 callbacks suppressed
	[ +13.298667] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.387542] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[Aug14 00:56] systemd-fstab-generator[2190]: Ignoring "noauto" option for root device
	[  +0.081494] kauditd_printk_skb: 94 callbacks suppressed
	[  +0.056884] systemd-fstab-generator[2202]: Ignoring "noauto" option for root device
	[  +0.173540] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.174416] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
	[  +0.287109] systemd-fstab-generator[2256]: Ignoring "noauto" option for root device
	[  +0.723600] systemd-fstab-generator[2409]: Ignoring "noauto" option for root device
	[  +2.178107] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	[  +6.064833] kauditd_printk_skb: 184 callbacks suppressed
	[  +0.224074] systemd-fstab-generator[3007]: Ignoring "noauto" option for root device
	
	
	==> etcd [521e3111e56cb4d1ad24b27db3daaca497291219122453fcaa52e241b1ff788c] <==
	{"level":"info","ts":"2024-08-14T00:55:43.787900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T00:55:43.787925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 247e73b5d65300e1 elected leader 247e73b5d65300e1 at term 2"}
	{"level":"info","ts":"2024-08-14T00:55:43.791670Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:55:43.792636Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"247e73b5d65300e1","local-member-attributes":"{Name:kubernetes-upgrade-492920 ClientURLs:[https://192.168.50.136:2379]}","request-path":"/0/members/247e73b5d65300e1/attributes","cluster-id":"736953c025287a25","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:55:43.792761Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:55:43.793033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:55:43.793190Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:55:43.793212Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:55:43.793831Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:55:43.793921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"736953c025287a25","local-member-id":"247e73b5d65300e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:55:43.794023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:55:43.794072Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:55:43.794721Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T00:55:43.794894Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:55:43.795644Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.136:2379"}
	{"level":"info","ts":"2024-08-14T00:56:01.010820Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T00:56:01.010910Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-492920","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.136:2380"],"advertise-client-urls":["https://192.168.50.136:2379"]}
	{"level":"warn","ts":"2024-08-14T00:56:01.011023Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:56:01.011204Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:56:01.112817Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.136:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T00:56:01.112959Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.136:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T00:56:01.114546Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"247e73b5d65300e1","current-leader-member-id":"247e73b5d65300e1"}
	{"level":"info","ts":"2024-08-14T00:56:01.120412Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-08-14T00:56:01.120518Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-08-14T00:56:01.120543Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-492920","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.136:2380"],"advertise-client-urls":["https://192.168.50.136:2379"]}
	
	
	==> etcd [8363ea0c886a5440f3cb21dacb36a668a8d9fefa9fded1aaa7e6f9f37cd58118] <==
	{"level":"info","ts":"2024-08-14T00:56:16.051913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 switched to configuration voters=(2629666457252987105)"}
	{"level":"info","ts":"2024-08-14T00:56:16.051973Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"736953c025287a25","local-member-id":"247e73b5d65300e1","added-peer-id":"247e73b5d65300e1","added-peer-peer-urls":["https://192.168.50.136:2380"]}
	{"level":"info","ts":"2024-08-14T00:56:16.052048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"736953c025287a25","local-member-id":"247e73b5d65300e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:56:16.052090Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T00:56:16.053630Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T00:56:16.054462Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"247e73b5d65300e1","initial-advertise-peer-urls":["https://192.168.50.136:2380"],"listen-peer-urls":["https://192.168.50.136:2380"],"advertise-client-urls":["https://192.168.50.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T00:56:16.054953Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T00:56:16.055083Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-08-14T00:56:16.055279Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.136:2380"}
	{"level":"info","ts":"2024-08-14T00:56:17.820337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T00:56:17.820394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T00:56:17.820433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 received MsgPreVoteResp from 247e73b5d65300e1 at term 2"}
	{"level":"info","ts":"2024-08-14T00:56:17.820444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T00:56:17.820450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 received MsgVoteResp from 247e73b5d65300e1 at term 3"}
	{"level":"info","ts":"2024-08-14T00:56:17.820459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"247e73b5d65300e1 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T00:56:17.820466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 247e73b5d65300e1 elected leader 247e73b5d65300e1 at term 3"}
	{"level":"info","ts":"2024-08-14T00:56:17.826587Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"247e73b5d65300e1","local-member-attributes":"{Name:kubernetes-upgrade-492920 ClientURLs:[https://192.168.50.136:2379]}","request-path":"/0/members/247e73b5d65300e1/attributes","cluster-id":"736953c025287a25","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T00:56:17.826592Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:56:17.826783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T00:56:17.827027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T00:56:17.827039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T00:56:17.827612Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:56:17.827740Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T00:56:17.828532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.136:2379"}
	{"level":"info","ts":"2024-08-14T00:56:17.828600Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:56:24 up 1 min,  0 users,  load average: 1.06, 0.34, 0.12
	Linux kubernetes-upgrade-492920 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b155a186b4af21fe83af97c74389938bfabb42190b520f1c1acc7e117c4198f9] <==
	I0814 00:55:45.091882       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:55:45.109268       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:55:45.109367       1 policy_source.go:224] refreshing policies
	E0814 00:55:45.117406       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0814 00:55:45.166450       1 controller.go:615] quota admission added evaluator for: namespaces
	E0814 00:55:45.183046       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0814 00:55:45.299040       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 00:55:45.981988       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0814 00:55:45.988479       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0814 00:55:45.988509       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 00:55:46.654932       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 00:55:46.716941       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 00:55:46.876227       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0814 00:55:46.882739       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.136]
	I0814 00:55:46.884618       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:55:46.890595       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 00:55:46.993892       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 00:55:58.041570       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 00:55:58.059580       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0814 00:55:58.064245       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0814 00:55:58.076392       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 00:55:58.125121       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0814 00:56:01.013630       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0814 00:56:01.049318       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 00:56:01.049490       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ce87480dd8b16d58207ec3ca2df878b51b7cefbddb683998fb946b4d8ffa3e2e] <==
	I0814 00:56:19.084406       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 00:56:19.099839       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 00:56:19.099944       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 00:56:19.100768       1 aggregator.go:171] initial CRD sync complete...
	I0814 00:56:19.100801       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 00:56:19.100807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 00:56:19.100812       1 cache.go:39] Caches are synced for autoregister controller
	I0814 00:56:19.100917       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 00:56:19.100948       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 00:56:19.101156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 00:56:19.105293       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 00:56:19.105312       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 00:56:19.150490       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 00:56:19.157736       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 00:56:19.157776       1 policy_source.go:224] refreshing policies
	I0814 00:56:19.159365       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0814 00:56:19.185666       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0814 00:56:19.978333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 00:56:20.783502       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 00:56:20.793722       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 00:56:20.828959       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 00:56:20.988953       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 00:56:21.005625       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 00:56:21.597792       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 00:56:22.666143       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1317230fc0fd9ddd1d02001360e322a965474d6b8746d3fc1d582abff0e12092] <==
	I0814 00:56:22.456288       1 shared_informer.go:320] Caches are synced for PVC protection
	I0814 00:56:22.457855       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0814 00:56:22.461631       1 shared_informer.go:320] Caches are synced for crt configmap
	I0814 00:56:22.461717       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-492920"
	I0814 00:56:22.463247       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0814 00:56:22.463354       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0814 00:56:22.463536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="101.214µs"
	I0814 00:56:22.465002       1 shared_informer.go:320] Caches are synced for daemon sets
	I0814 00:56:22.469970       1 shared_informer.go:320] Caches are synced for GC
	I0814 00:56:22.470070       1 shared_informer.go:320] Caches are synced for expand
	I0814 00:56:22.470858       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0814 00:56:22.476060       1 shared_informer.go:320] Caches are synced for attach detach
	I0814 00:56:22.479219       1 shared_informer.go:320] Caches are synced for cronjob
	I0814 00:56:22.479255       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0814 00:56:22.482547       1 shared_informer.go:320] Caches are synced for persistent volume
	I0814 00:56:22.483927       1 shared_informer.go:320] Caches are synced for endpoint
	I0814 00:56:22.510624       1 shared_informer.go:320] Caches are synced for job
	I0814 00:56:22.610608       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 00:56:22.613683       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0814 00:56:22.625141       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 00:56:23.043795       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:56:23.043834       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0814 00:56:23.077526       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:56:23.272716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.05µs"
	I0814 00:56:23.296245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="87.632µs"
	
	
	==> kube-controller-manager [c21a0d0ea6df11f978cc3723b913762e23858a76be555ffcc7931776feab26bc] <==
	I0814 00:55:51.746502       1 shared_informer.go:320] Caches are synced for crt configmap
	I0814 00:55:51.755044       1 shared_informer.go:320] Caches are synced for expand
	I0814 00:55:51.762441       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0814 00:55:51.791985       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0814 00:55:51.832581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-492920"
	I0814 00:55:51.854464       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 00:55:51.869719       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0814 00:55:51.869898       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-492920"
	I0814 00:55:51.877355       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0814 00:55:51.891522       1 shared_informer.go:320] Caches are synced for deployment
	I0814 00:55:51.896022       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0814 00:55:51.897202       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 00:55:51.939980       1 shared_informer.go:320] Caches are synced for disruption
	I0814 00:55:51.994244       1 shared_informer.go:320] Caches are synced for persistent volume
	I0814 00:55:52.002545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-492920"
	I0814 00:55:52.377009       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:55:52.419150       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 00:55:52.419282       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0814 00:55:55.442416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-492920"
	I0814 00:55:58.161448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="79.948263ms"
	I0814 00:55:58.197310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.799541ms"
	I0814 00:55:58.332544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="134.837284ms"
	I0814 00:55:58.332836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="220.413µs"
	I0814 00:55:59.931342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="108.857µs"
	I0814 00:55:59.972219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="164.284µs"
	
	
	==> kube-proxy [9b600da323159d69d60d182bfb6b4bc09cbff9849fd29eac54465668ba3c0706] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:56:21.738973       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:56:21.756771       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.136"]
	E0814 00:56:21.756841       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:56:21.816271       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:56:21.816317       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:56:21.816342       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:56:21.826330       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:56:21.830335       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:56:21.830361       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:56:21.832890       1 config.go:197] "Starting service config controller"
	I0814 00:56:21.832942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:56:21.832961       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:56:21.832965       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:56:21.833374       1 config.go:326] "Starting node config controller"
	I0814 00:56:21.833397       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:56:21.933694       1 shared_informer.go:320] Caches are synced for node config
	I0814 00:56:21.933734       1 shared_informer.go:320] Caches are synced for service config
	I0814 00:56:21.933775       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ec6ff27c4d4120093f6b6c442248a64bdd1252d8dab0e13cf399fbcdf110f21a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 00:56:00.146089       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 00:56:00.166661       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.136"]
	E0814 00:56:00.166904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 00:56:00.244638       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 00:56:00.244683       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 00:56:00.244725       1 server_linux.go:169] "Using iptables Proxier"
	I0814 00:56:00.247213       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 00:56:00.247603       1 server.go:483] "Version info" version="v1.31.0"
	I0814 00:56:00.247625       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:56:00.249394       1 config.go:326] "Starting node config controller"
	I0814 00:56:00.249423       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 00:56:00.249812       1 config.go:197] "Starting service config controller"
	I0814 00:56:00.249874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 00:56:00.249950       1 config.go:104] "Starting endpoint slice config controller"
	I0814 00:56:00.249986       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 00:56:00.349905       1 shared_informer.go:320] Caches are synced for node config
	I0814 00:56:00.350234       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 00:56:00.350234       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0a0a85675b9629cc20805d9cac462a99baf5c0e25c75042d24960287abcf2bd6] <==
	W0814 00:55:45.875115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 00:55:45.875284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.045989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 00:55:46.046041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.070144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 00:55:46.070261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.118676       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 00:55:46.118750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.147652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 00:55:46.147698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.242476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 00:55:46.242631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.277486       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 00:55:46.277538       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 00:55:46.284603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 00:55:46.284686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.299588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 00:55:46.299672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 00:55:46.300107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 00:55:46.300146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 00:55:48.449240       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 00:56:01.030655       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0814 00:56:01.031026       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0814 00:56:01.031498       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0814 00:56:01.034204       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7c46341c392809adc970c5b68e7984d1c63e6ca5f486375f736dde0d2b1a8705] <==
	I0814 00:56:16.614352       1 serving.go:386] Generated self-signed cert in-memory
	W0814 00:56:19.043941       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 00:56:19.044006       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 00:56:19.044021       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 00:56:19.044115       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 00:56:19.104614       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 00:56:19.104753       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 00:56:19.110000       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 00:56:19.110222       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 00:56:19.110393       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 00:56:19.110475       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 00:56:19.211020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: W0814 00:56:19.090755    2539 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:kubernetes-upgrade-492920" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-492920' and this object
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:19.090840    2539 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:kubernetes-upgrade-492920\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-492920' and this object" logger="UnhandledError"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: W0814 00:56:19.090721    2539 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:kubernetes-upgrade-492920" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'kubernetes-upgrade-492920' and this object
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:19.090918    2539 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:kubernetes-upgrade-492920\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-492920' and this object" logger="UnhandledError"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:19.090792    2539 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:kubernetes-upgrade-492920\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-492920' and this object" logger="UnhandledError"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.161252    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2f71a8d-d8b6-4cf9-882c-05d89fd4d353-xtables-lock\") pod \"kube-proxy-rwr5b\" (UID: \"c2f71a8d-d8b6-4cf9-882c-05d89fd4d353\") " pod="kube-system/kube-proxy-rwr5b"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.161439    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/48f9db41-4857-4e76-acc8-09e3fdb2c279-tmp\") pod \"storage-provisioner\" (UID: \"48f9db41-4857-4e76-acc8-09e3fdb2c279\") " pod="kube-system/storage-provisioner"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.161626    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2f71a8d-d8b6-4cf9-882c-05d89fd4d353-lib-modules\") pod \"kube-proxy-rwr5b\" (UID: \"c2f71a8d-d8b6-4cf9-882c-05d89fd4d353\") " pod="kube-system/kube-proxy-rwr5b"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.233973    2539 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-492920"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.234260    2539 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-492920"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.234383    2539 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 14 00:56:19 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:19.235700    2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.176259    2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.176646    2539 projected.go:194] Error preparing data for projected volume kube-api-access-5xmhz for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.176860    2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48f9db41-4857-4e76-acc8-09e3fdb2c279-kube-api-access-5xmhz podName:48f9db41-4857-4e76-acc8-09e3fdb2c279 nodeName:}" failed. No retries permitted until 2024-08-14 00:56:20.676773298 +0000 UTC m=+5.708336120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5xmhz" (UniqueName: "kubernetes.io/projected/48f9db41-4857-4e76-acc8-09e3fdb2c279-kube-api-access-5xmhz") pod "storage-provisioner" (UID: "48f9db41-4857-4e76-acc8-09e3fdb2c279") : failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.179379    2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.179416    2539 projected.go:194] Error preparing data for projected volume kube-api-access-nwzvl for pod kube-system/coredns-6f6b679f8f-5474l: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.179465    2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b87ae798-5cfa-47da-8b61-172d11d0c2ae-kube-api-access-nwzvl podName:b87ae798-5cfa-47da-8b61-172d11d0c2ae nodeName:}" failed. No retries permitted until 2024-08-14 00:56:20.679450926 +0000 UTC m=+5.711013749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nwzvl" (UniqueName: "kubernetes.io/projected/b87ae798-5cfa-47da-8b61-172d11d0c2ae-kube-api-access-nwzvl") pod "coredns-6f6b679f8f-5474l" (UID: "b87ae798-5cfa-47da-8b61-172d11d0c2ae") : failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.180623    2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.180658    2539 projected.go:194] Error preparing data for projected volume kube-api-access-wxt8l for pod kube-system/coredns-6f6b679f8f-btgqk: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.180722    2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f7261a4-3b02-40b3-b203-c839b9bed864-kube-api-access-wxt8l podName:3f7261a4-3b02-40b3-b203-c839b9bed864 nodeName:}" failed. No retries permitted until 2024-08-14 00:56:20.680708871 +0000 UTC m=+5.712271695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wxt8l" (UniqueName: "kubernetes.io/projected/3f7261a4-3b02-40b3-b203-c839b9bed864-kube-api-access-wxt8l") pod "coredns-6f6b679f8f-btgqk" (UID: "3f7261a4-3b02-40b3-b203-c839b9bed864") : failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.181756    2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.181818    2539 projected.go:194] Error preparing data for projected volume kube-api-access-k68bg for pod kube-system/kube-proxy-rwr5b: failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:20 kubernetes-upgrade-492920 kubelet[2539]: E0814 00:56:20.181902    2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2f71a8d-d8b6-4cf9-882c-05d89fd4d353-kube-api-access-k68bg podName:c2f71a8d-d8b6-4cf9-882c-05d89fd4d353 nodeName:}" failed. No retries permitted until 2024-08-14 00:56:20.681889971 +0000 UTC m=+5.713452797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k68bg" (UniqueName: "kubernetes.io/projected/c2f71a8d-d8b6-4cf9-882c-05d89fd4d353-kube-api-access-k68bg") pod "kube-proxy-rwr5b" (UID: "c2f71a8d-d8b6-4cf9-882c-05d89fd4d353") : failed to sync configmap cache: timed out waiting for the condition
	Aug 14 00:56:24 kubernetes-upgrade-492920 kubelet[2539]: I0814 00:56:24.271367    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [aa66d26e41c6f9675eb7e860408c21d315ef70334d8bbc45dbd535aabefbcb8b] <==
	I0814 00:56:21.484632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 00:56:21.546973       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 00:56:21.547313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 00:56:21.612625       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 00:56:21.612777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-492920_20ded131-c23a-4144-a140-568fd67a7081!
	I0814 00:56:21.613794       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8eb10877-8823-45eb-bf09-92be9027454b", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-492920_20ded131-c23a-4144-a140-568fd67a7081 became leader
	I0814 00:56:21.713482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-492920_20ded131-c23a-4144-a140-568fd67a7081!
	
	
	==> storage-provisioner [bdb320848e91f1334caf6ec702b4e566517b2f64bef99a01092dff27ff838678] <==
	I0814 00:56:00.205092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 00:56:23.208688   58356 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19429-9425/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-492920 -n kubernetes-upgrade-492920
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-492920 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-492920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-492920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-492920: (1.105115691s)
--- FAIL: TestKubernetesUpgrade (389.32s)
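Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses to return any token (here, a log line) longer than its default 64 KiB limit (bufio.MaxScanTokenSize), which is why reading lastStart.txt fails once a single line in that file grows past that size. The snippet below is only a minimal, self-contained sketch of that standard-library behaviour, not minikube's actual logs.go code, and the file contents are synthetic; it reproduces the error and shows the usual Scanner.Buffer workaround.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// A single "line" larger than bufio.MaxScanTokenSize (64 KiB).
		longLine := strings.Repeat("x", 100*1024)

		// Default scanner: Scan returns false and Err reports
		// "bufio.Scanner: token too long".
		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Println("default scanner err:", s.Err())

		// Scanner with an enlarged buffer: the same line is read fine.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow tokens up to 1 MiB
		for s.Scan() {
		}
		fmt.Println("enlarged buffer err:", s.Err())
	}

An alternative is to read with bufio.Reader.ReadString('\n'), which grows its result as needed and has no fixed per-line limit.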

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (297.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0814 00:55:05.518634   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m57.114939768s)

                                                
                                                
-- stdout --
	* [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:55:03.696355   57577 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:55:03.696607   57577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:55:03.696617   57577 out.go:304] Setting ErrFile to fd 2...
	I0814 00:55:03.696621   57577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:55:03.696796   57577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:55:03.697350   57577 out.go:298] Setting JSON to false
	I0814 00:55:03.698284   57577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5850,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:55:03.698338   57577 start.go:139] virtualization: kvm guest
	I0814 00:55:03.700908   57577 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:55:03.702277   57577 notify.go:220] Checking for updates...
	I0814 00:55:03.702283   57577 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:55:03.703620   57577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:55:03.704933   57577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:55:03.706334   57577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:55:03.707535   57577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:55:03.708801   57577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:55:03.710720   57577 config.go:182] Loaded profile config "cert-expiration-769488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:55:03.710855   57577 config.go:182] Loaded profile config "kubernetes-upgrade-492920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:55:03.710995   57577 config.go:182] Loaded profile config "pause-074686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:55:03.711078   57577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:55:03.745905   57577 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 00:55:03.746994   57577 start.go:297] selected driver: kvm2
	I0814 00:55:03.747007   57577 start.go:901] validating driver "kvm2" against <nil>
	I0814 00:55:03.747018   57577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:55:03.747674   57577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:55:03.747754   57577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 00:55:03.762522   57577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 00:55:03.762566   57577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 00:55:03.762833   57577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 00:55:03.762913   57577 cni.go:84] Creating CNI manager for ""
	I0814 00:55:03.762931   57577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:55:03.762947   57577 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 00:55:03.763015   57577 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:55:03.763150   57577 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 00:55:03.764863   57577 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 00:55:03.765919   57577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 00:55:03.765946   57577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 00:55:03.765959   57577 cache.go:56] Caching tarball of preloaded images
	I0814 00:55:03.766029   57577 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 00:55:03.766056   57577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 00:55:03.766167   57577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 00:55:03.766190   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json: {Name:mk2a4eedf248d49e32909c6cfca37fd6ec8cf38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:55:03.766332   57577 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 00:55:31.535320   57577 start.go:364] duration metric: took 27.768940506s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 00:55:31.535388   57577 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 00:55:31.535510   57577 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 00:55:31.537882   57577 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 00:55:31.538073   57577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:55:31.538146   57577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:55:31.554840   57577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0814 00:55:31.555283   57577 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:55:31.555885   57577 main.go:141] libmachine: Using API Version  1
	I0814 00:55:31.555909   57577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:55:31.556232   57577 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:55:31.556446   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 00:55:31.556600   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:31.556774   57577 start.go:159] libmachine.API.Create for "old-k8s-version-179312" (driver="kvm2")
	I0814 00:55:31.556805   57577 client.go:168] LocalClient.Create starting
	I0814 00:55:31.556842   57577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0814 00:55:31.556880   57577 main.go:141] libmachine: Decoding PEM data...
	I0814 00:55:31.556905   57577 main.go:141] libmachine: Parsing certificate...
	I0814 00:55:31.556975   57577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0814 00:55:31.557002   57577 main.go:141] libmachine: Decoding PEM data...
	I0814 00:55:31.557022   57577 main.go:141] libmachine: Parsing certificate...
	I0814 00:55:31.557054   57577 main.go:141] libmachine: Running pre-create checks...
	I0814 00:55:31.557067   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .PreCreateCheck
	I0814 00:55:31.557420   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 00:55:31.557832   57577 main.go:141] libmachine: Creating machine...
	I0814 00:55:31.557848   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .Create
	I0814 00:55:31.557980   57577 main.go:141] libmachine: (old-k8s-version-179312) Creating KVM machine...
	I0814 00:55:31.559337   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found existing default KVM network
	I0814 00:55:31.560592   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:31.560438   57790 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ff:ea:bc} reservation:<nil>}
	I0814 00:55:31.561573   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:31.561465   57790 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:d6:4c} reservation:<nil>}
	I0814 00:55:31.562797   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:31.562719   57790 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000338ee0}
	I0814 00:55:31.562820   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | created network xml: 
	I0814 00:55:31.562830   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | <network>
	I0814 00:55:31.562841   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   <name>mk-old-k8s-version-179312</name>
	I0814 00:55:31.562849   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   <dns enable='no'/>
	I0814 00:55:31.562858   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   
	I0814 00:55:31.562867   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0814 00:55:31.562876   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |     <dhcp>
	I0814 00:55:31.562896   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0814 00:55:31.562906   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |     </dhcp>
	I0814 00:55:31.562913   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   </ip>
	I0814 00:55:31.562922   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG |   
	I0814 00:55:31.562929   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | </network>
	I0814 00:55:31.562938   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | 
	I0814 00:55:31.568868   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | trying to create private KVM network mk-old-k8s-version-179312 192.168.61.0/24...
	I0814 00:55:31.640300   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | private KVM network mk-old-k8s-version-179312 192.168.61.0/24 created
	I0814 00:55:31.640336   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312 ...
	I0814 00:55:31.640353   57577 main.go:141] libmachine: (old-k8s-version-179312) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 00:55:31.640370   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:31.640303   57790 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:55:31.640474   57577 main.go:141] libmachine: (old-k8s-version-179312) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 00:55:31.877025   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:31.876886   57790 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa...
	I0814 00:55:32.089893   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:32.089752   57790 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/old-k8s-version-179312.rawdisk...
	I0814 00:55:32.089944   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Writing magic tar header
	I0814 00:55:32.089962   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Writing SSH key tar header
	I0814 00:55:32.089976   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:32.089892   57790 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312 ...
	I0814 00:55:32.090023   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312
	I0814 00:55:32.090071   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0814 00:55:32.090098   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312 (perms=drwx------)
	I0814 00:55:32.090113   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:55:32.090134   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0814 00:55:32.090145   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 00:55:32.090162   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home/jenkins
	I0814 00:55:32.090174   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0814 00:55:32.090187   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0814 00:55:32.090199   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Checking permissions on dir: /home
	I0814 00:55:32.090216   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Skipping /home - not owner
	I0814 00:55:32.090233   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0814 00:55:32.090246   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 00:55:32.090270   57577 main.go:141] libmachine: (old-k8s-version-179312) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 00:55:32.090283   57577 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 00:55:32.091407   57577 main.go:141] libmachine: (old-k8s-version-179312) define libvirt domain using xml: 
	I0814 00:55:32.091429   57577 main.go:141] libmachine: (old-k8s-version-179312) <domain type='kvm'>
	I0814 00:55:32.091442   57577 main.go:141] libmachine: (old-k8s-version-179312)   <name>old-k8s-version-179312</name>
	I0814 00:55:32.091457   57577 main.go:141] libmachine: (old-k8s-version-179312)   <memory unit='MiB'>2200</memory>
	I0814 00:55:32.091470   57577 main.go:141] libmachine: (old-k8s-version-179312)   <vcpu>2</vcpu>
	I0814 00:55:32.091482   57577 main.go:141] libmachine: (old-k8s-version-179312)   <features>
	I0814 00:55:32.091496   57577 main.go:141] libmachine: (old-k8s-version-179312)     <acpi/>
	I0814 00:55:32.091532   57577 main.go:141] libmachine: (old-k8s-version-179312)     <apic/>
	I0814 00:55:32.091546   57577 main.go:141] libmachine: (old-k8s-version-179312)     <pae/>
	I0814 00:55:32.091554   57577 main.go:141] libmachine: (old-k8s-version-179312)     
	I0814 00:55:32.091567   57577 main.go:141] libmachine: (old-k8s-version-179312)   </features>
	I0814 00:55:32.091580   57577 main.go:141] libmachine: (old-k8s-version-179312)   <cpu mode='host-passthrough'>
	I0814 00:55:32.091623   57577 main.go:141] libmachine: (old-k8s-version-179312)   
	I0814 00:55:32.091654   57577 main.go:141] libmachine: (old-k8s-version-179312)   </cpu>
	I0814 00:55:32.091730   57577 main.go:141] libmachine: (old-k8s-version-179312)   <os>
	I0814 00:55:32.091766   57577 main.go:141] libmachine: (old-k8s-version-179312)     <type>hvm</type>
	I0814 00:55:32.091781   57577 main.go:141] libmachine: (old-k8s-version-179312)     <boot dev='cdrom'/>
	I0814 00:55:32.091794   57577 main.go:141] libmachine: (old-k8s-version-179312)     <boot dev='hd'/>
	I0814 00:55:32.091808   57577 main.go:141] libmachine: (old-k8s-version-179312)     <bootmenu enable='no'/>
	I0814 00:55:32.091820   57577 main.go:141] libmachine: (old-k8s-version-179312)   </os>
	I0814 00:55:32.091834   57577 main.go:141] libmachine: (old-k8s-version-179312)   <devices>
	I0814 00:55:32.091848   57577 main.go:141] libmachine: (old-k8s-version-179312)     <disk type='file' device='cdrom'>
	I0814 00:55:32.091874   57577 main.go:141] libmachine: (old-k8s-version-179312)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/boot2docker.iso'/>
	I0814 00:55:32.091894   57577 main.go:141] libmachine: (old-k8s-version-179312)       <target dev='hdc' bus='scsi'/>
	I0814 00:55:32.091906   57577 main.go:141] libmachine: (old-k8s-version-179312)       <readonly/>
	I0814 00:55:32.091919   57577 main.go:141] libmachine: (old-k8s-version-179312)     </disk>
	I0814 00:55:32.091932   57577 main.go:141] libmachine: (old-k8s-version-179312)     <disk type='file' device='disk'>
	I0814 00:55:32.091955   57577 main.go:141] libmachine: (old-k8s-version-179312)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 00:55:32.091984   57577 main.go:141] libmachine: (old-k8s-version-179312)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/old-k8s-version-179312.rawdisk'/>
	I0814 00:55:32.091997   57577 main.go:141] libmachine: (old-k8s-version-179312)       <target dev='hda' bus='virtio'/>
	I0814 00:55:32.092012   57577 main.go:141] libmachine: (old-k8s-version-179312)     </disk>
	I0814 00:55:32.092030   57577 main.go:141] libmachine: (old-k8s-version-179312)     <interface type='network'>
	I0814 00:55:32.092046   57577 main.go:141] libmachine: (old-k8s-version-179312)       <source network='mk-old-k8s-version-179312'/>
	I0814 00:55:32.092059   57577 main.go:141] libmachine: (old-k8s-version-179312)       <model type='virtio'/>
	I0814 00:55:32.092071   57577 main.go:141] libmachine: (old-k8s-version-179312)     </interface>
	I0814 00:55:32.092081   57577 main.go:141] libmachine: (old-k8s-version-179312)     <interface type='network'>
	I0814 00:55:32.092096   57577 main.go:141] libmachine: (old-k8s-version-179312)       <source network='default'/>
	I0814 00:55:32.092109   57577 main.go:141] libmachine: (old-k8s-version-179312)       <model type='virtio'/>
	I0814 00:55:32.092123   57577 main.go:141] libmachine: (old-k8s-version-179312)     </interface>
	I0814 00:55:32.092135   57577 main.go:141] libmachine: (old-k8s-version-179312)     <serial type='pty'>
	I0814 00:55:32.092149   57577 main.go:141] libmachine: (old-k8s-version-179312)       <target port='0'/>
	I0814 00:55:32.092162   57577 main.go:141] libmachine: (old-k8s-version-179312)     </serial>
	I0814 00:55:32.092191   57577 main.go:141] libmachine: (old-k8s-version-179312)     <console type='pty'>
	I0814 00:55:32.092211   57577 main.go:141] libmachine: (old-k8s-version-179312)       <target type='serial' port='0'/>
	I0814 00:55:32.092225   57577 main.go:141] libmachine: (old-k8s-version-179312)     </console>
	I0814 00:55:32.092236   57577 main.go:141] libmachine: (old-k8s-version-179312)     <rng model='virtio'>
	I0814 00:55:32.092248   57577 main.go:141] libmachine: (old-k8s-version-179312)       <backend model='random'>/dev/random</backend>
	I0814 00:55:32.092258   57577 main.go:141] libmachine: (old-k8s-version-179312)     </rng>
	I0814 00:55:32.092270   57577 main.go:141] libmachine: (old-k8s-version-179312)     
	I0814 00:55:32.092278   57577 main.go:141] libmachine: (old-k8s-version-179312)     
	I0814 00:55:32.092289   57577 main.go:141] libmachine: (old-k8s-version-179312)   </devices>
	I0814 00:55:32.092297   57577 main.go:141] libmachine: (old-k8s-version-179312) </domain>
	I0814 00:55:32.092307   57577 main.go:141] libmachine: (old-k8s-version-179312) 
	I0814 00:55:32.098774   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:ae:96:3d in network default
	I0814 00:55:32.099299   57577 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 00:55:32.099339   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:32.100028   57577 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 00:55:32.100306   57577 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 00:55:32.100943   57577 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 00:55:32.101591   57577 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 00:55:33.472517   57577 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 00:55:33.473697   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:33.474308   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:33.474347   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:33.474293   57790 retry.go:31] will retry after 304.321693ms: waiting for machine to come up
	I0814 00:55:33.780870   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:33.781566   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:33.781594   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:33.781527   57790 retry.go:31] will retry after 264.143284ms: waiting for machine to come up
	I0814 00:55:34.047100   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:34.047634   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:34.047665   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:34.047592   57790 retry.go:31] will retry after 324.712825ms: waiting for machine to come up
	I0814 00:55:34.374081   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:34.374650   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:34.374679   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:34.374610   57790 retry.go:31] will retry after 526.975097ms: waiting for machine to come up
	I0814 00:55:34.903553   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:34.904114   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:34.904141   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:34.904078   57790 retry.go:31] will retry after 590.3132ms: waiting for machine to come up
	I0814 00:55:35.495952   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:35.496344   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:35.496366   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:35.496281   57790 retry.go:31] will retry after 594.301143ms: waiting for machine to come up
	I0814 00:55:36.091970   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:36.092495   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:36.092523   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:36.092441   57790 retry.go:31] will retry after 1.189840198s: waiting for machine to come up
	I0814 00:55:37.283941   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:37.284599   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:37.284619   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:37.284552   57790 retry.go:31] will retry after 1.349070391s: waiting for machine to come up
	I0814 00:55:38.635003   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:38.635604   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:38.635638   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:38.635553   57790 retry.go:31] will retry after 1.613537695s: waiting for machine to come up
	I0814 00:55:40.251072   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:40.251499   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:40.251531   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:40.251448   57790 retry.go:31] will retry after 1.60089539s: waiting for machine to come up
	I0814 00:55:41.854151   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:41.854754   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:41.854782   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:41.854702   57790 retry.go:31] will retry after 2.148243714s: waiting for machine to come up
	I0814 00:55:44.004437   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:44.005072   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:44.005099   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:44.005024   57790 retry.go:31] will retry after 2.619209472s: waiting for machine to come up
	I0814 00:55:46.625566   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:46.626161   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:46.626187   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:46.626115   57790 retry.go:31] will retry after 3.838715847s: waiting for machine to come up
	I0814 00:55:50.469166   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:50.469642   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 00:55:50.469666   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 00:55:50.469604   57790 retry.go:31] will retry after 4.767600404s: waiting for machine to come up
	I0814 00:55:55.239239   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.239811   57577 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 00:55:55.239828   57577 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 00:55:55.239838   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.240308   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312
	I0814 00:55:55.316696   57577 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 00:55:55.316730   57577 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 00:55:55.316741   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 00:55:55.319399   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.319812   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.319846   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.319928   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 00:55:55.319954   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 00:55:55.319994   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 00:55:55.320009   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 00:55:55.320027   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 00:55:55.446445   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 00:55:55.446785   57577 main.go:141] libmachine: (old-k8s-version-179312) KVM machine creation complete!
	I0814 00:55:55.447112   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 00:55:55.447786   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:55.448016   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:55.448175   57577 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 00:55:55.448192   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 00:55:55.449437   57577 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 00:55:55.449454   57577 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 00:55:55.449475   57577 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 00:55:55.449483   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:55.452075   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.452571   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.452602   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.452762   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:55.452922   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.453092   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.453215   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:55.453370   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:55.453620   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:55.453635   57577 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 00:55:55.557186   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:55:55.557209   57577 main.go:141] libmachine: Detecting the provisioner...
	I0814 00:55:55.557219   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:55.560097   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.560604   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.560642   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.560843   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:55.561053   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.561209   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.561351   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:55.561599   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:55.561786   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:55.561798   57577 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 00:55:55.670388   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 00:55:55.670466   57577 main.go:141] libmachine: found compatible host: buildroot
	I0814 00:55:55.670476   57577 main.go:141] libmachine: Provisioning with buildroot...
	I0814 00:55:55.670484   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 00:55:55.670748   57577 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 00:55:55.670772   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 00:55:55.670981   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:55.673322   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.673724   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.673752   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.673836   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:55.673986   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.674167   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.674289   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:55.674470   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:55.674643   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:55.674655   57577 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 00:55:55.795046   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 00:55:55.795087   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:55.798230   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.798723   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.798757   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.798954   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:55.799167   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.799377   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:55.799538   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:55.799734   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:55.799954   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:55.799981   57577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 00:55:55.914402   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 00:55:55.914434   57577 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 00:55:55.914468   57577 buildroot.go:174] setting up certificates
	I0814 00:55:55.914478   57577 provision.go:84] configureAuth start
	I0814 00:55:55.914490   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 00:55:55.914774   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 00:55:55.917526   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.917875   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.917906   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.918062   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:55.920417   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.920799   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:55.920829   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:55.920975   57577 provision.go:143] copyHostCerts
	I0814 00:55:55.921043   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 00:55:55.921058   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 00:55:55.921129   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 00:55:55.921242   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 00:55:55.921250   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 00:55:55.921271   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 00:55:55.921337   57577 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 00:55:55.921344   57577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 00:55:55.921363   57577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 00:55:55.921409   57577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
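The server certificate generated above is a CA-signed cert whose subject alternative names cover the addresses and host names listed in san=[...]. A compact sketch of issuing such a SAN cert with Go's crypto/x509; it is self-contained (it also creates a throwaway CA standing in for the minikube CA), and the key sizes and validity periods are illustrative assumptions, not minikube's values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the minikube CA referenced in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-179312"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.123")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-179312"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
}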
	I0814 00:55:56.053872   57577 provision.go:177] copyRemoteCerts
	I0814 00:55:56.053930   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 00:55:56.053954   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.057254   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.057603   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.057639   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.057816   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.058015   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.058188   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.058352   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 00:55:56.139609   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 00:55:56.165593   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 00:55:56.188128   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 00:55:56.209868   57577 provision.go:87] duration metric: took 295.377794ms to configureAuth
	I0814 00:55:56.209893   57577 buildroot.go:189] setting minikube options for container-runtime
	I0814 00:55:56.210109   57577 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 00:55:56.210217   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.212858   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.213112   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.213140   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.213288   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.213441   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.213623   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.213790   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.213956   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:56.214144   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:56.214158   57577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 00:55:56.471965   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 00:55:56.471996   57577 main.go:141] libmachine: Checking connection to Docker...
	I0814 00:55:56.472008   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetURL
	I0814 00:55:56.473308   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using libvirt version 6000000
	I0814 00:55:56.475477   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.475851   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.475894   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.476065   57577 main.go:141] libmachine: Docker is up and running!
	I0814 00:55:56.476083   57577 main.go:141] libmachine: Reticulating splines...
	I0814 00:55:56.476090   57577 client.go:171] duration metric: took 24.919275834s to LocalClient.Create
	I0814 00:55:56.476115   57577 start.go:167] duration metric: took 24.919342328s to libmachine.API.Create "old-k8s-version-179312"
	I0814 00:55:56.476127   57577 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 00:55:56.476142   57577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 00:55:56.476174   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:56.476440   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 00:55:56.476483   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.478785   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.479119   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.479149   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.479303   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.479474   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.479630   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.479764   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 00:55:56.559782   57577 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 00:55:56.564052   57577 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 00:55:56.564080   57577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 00:55:56.564191   57577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 00:55:56.564302   57577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 00:55:56.564407   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 00:55:56.573190   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:55:56.598797   57577 start.go:296] duration metric: took 122.651964ms for postStartSetup
	I0814 00:55:56.598855   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 00:55:56.599510   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 00:55:56.602510   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.602913   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.602946   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.603216   57577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 00:55:56.603446   57577 start.go:128] duration metric: took 25.067923008s to createHost
	I0814 00:55:56.603469   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.606135   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.606541   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.606575   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.606669   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.606856   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.607073   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.607221   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.607482   57577 main.go:141] libmachine: Using SSH client type: native
	I0814 00:55:56.607718   57577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 00:55:56.607735   57577 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 00:55:56.717478   57577 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723596956.695296662
	
	I0814 00:55:56.717502   57577 fix.go:216] guest clock: 1723596956.695296662
	I0814 00:55:56.717510   57577 fix.go:229] Guest: 2024-08-14 00:55:56.695296662 +0000 UTC Remote: 2024-08-14 00:55:56.603459251 +0000 UTC m=+52.940484989 (delta=91.837411ms)
	I0814 00:55:56.717527   57577 fix.go:200] guest clock delta is within tolerance: 91.837411ms
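The guest-clock check above compares the VM's "date +%s.%N" reading against the host's wall clock and only resynchronizes when the difference exceeds a tolerance. A small sketch of that comparison, using the two timestamps from the log; the 2s threshold is an illustrative assumption, not minikube's exact tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Host time when the check ran and the guest's "date +%s.%N" reading,
	// both taken from the log lines above.
	host := time.Date(2024, 8, 14, 0, 55, 56, 603459251, time.UTC)
	guest := time.Unix(1723596956, 695296662).UTC()

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}

Run as-is, this prints a delta of 91.837411ms, matching the value reported in the log.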
	I0814 00:55:56.717532   57577 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 25.182181346s
	I0814 00:55:56.717549   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:56.717793   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 00:55:56.720883   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.721354   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.721387   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.721521   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:56.721967   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:56.722158   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 00:55:56.722263   57577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 00:55:56.722317   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.722334   57577 ssh_runner.go:195] Run: cat /version.json
	I0814 00:55:56.722352   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 00:55:56.725323   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.725361   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.725667   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.725699   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:56.725721   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.725779   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:56.725870   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.726065   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 00:55:56.726092   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.726243   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.726246   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 00:55:56.726423   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 00:55:56.726463   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 00:55:56.726585   57577 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 00:55:56.838436   57577 ssh_runner.go:195] Run: systemctl --version
	I0814 00:55:56.843828   57577 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 00:55:56.996037   57577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 00:55:57.002509   57577 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 00:55:57.002586   57577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 00:55:57.017414   57577 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 00:55:57.017443   57577 start.go:495] detecting cgroup driver to use...
	I0814 00:55:57.017522   57577 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 00:55:57.032500   57577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 00:55:57.046374   57577 docker.go:217] disabling cri-docker service (if available) ...
	I0814 00:55:57.046440   57577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 00:55:57.058995   57577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 00:55:57.071701   57577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 00:55:57.185591   57577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 00:55:57.343302   57577 docker.go:233] disabling docker service ...
	I0814 00:55:57.343381   57577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 00:55:57.357425   57577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 00:55:57.369926   57577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 00:55:57.485395   57577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 00:55:57.607834   57577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 00:55:57.620790   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 00:55:57.639473   57577 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 00:55:57.639530   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:55:57.650851   57577 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 00:55:57.650914   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:55:57.661713   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 00:55:57.672605   57577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
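The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the expected pause image and the cgroupfs cgroup manager, with conmon_cgroup forced to "pod". The same rewrite expressed as a small Go helper, purely as a sketch of the string manipulation (the keys and values are the ones shown in the log; the helper itself is hypothetical):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the log performs with sed:
// force pause_image and cgroup_manager, drop any existing conmon_cgroup line,
// and add conmon_cgroup = "pod" right after cgroup_manager.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}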
	I0814 00:55:57.682102   57577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 00:55:57.691827   57577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 00:55:57.701527   57577 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 00:55:57.701584   57577 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 00:55:57.713529   57577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 00:55:57.723112   57577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:55:57.843393   57577 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 00:55:57.987763   57577 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 00:55:57.987852   57577 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 00:55:57.992202   57577 start.go:563] Will wait 60s for crictl version
	I0814 00:55:57.992252   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:55:57.995701   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 00:55:58.038377   57577 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 00:55:58.038467   57577 ssh_runner.go:195] Run: crio --version
	I0814 00:55:58.066373   57577 ssh_runner.go:195] Run: crio --version
	I0814 00:55:58.102857   57577 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 00:55:58.104332   57577 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 00:55:58.107565   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:58.107935   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 01:55:46 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 00:55:58.107965   57577 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 00:55:58.108208   57577 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 00:55:58.112150   57577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 00:55:58.126570   57577 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 00:55:58.126722   57577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 00:55:58.126792   57577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:55:58.175458   57577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 00:55:58.175519   57577 ssh_runner.go:195] Run: which lz4
	I0814 00:55:58.180536   57577 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 00:55:58.195215   57577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 00:55:58.195253   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 00:55:59.699725   57577 crio.go:462] duration metric: took 1.519218747s to copy over tarball
	I0814 00:55:59.699806   57577 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 00:56:02.368801   57577 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.668952163s)
	I0814 00:56:02.368863   57577 crio.go:469] duration metric: took 2.669081459s to extract the tarball
	I0814 00:56:02.368878   57577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 00:56:02.411828   57577 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 00:56:02.453872   57577 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 00:56:02.453892   57577 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 00:56:02.453965   57577 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:56:02.454002   57577 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:02.454089   57577 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 00:56:02.454133   57577 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:02.454003   57577 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:02.454092   57577 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 00:56:02.454022   57577 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:02.454100   57577 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:02.455443   57577 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 00:56:02.455707   57577 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:02.455716   57577 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:02.455729   57577 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:56:02.455707   57577 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:02.455712   57577 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:02.455765   57577 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:02.455768   57577 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 00:56:02.705688   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 00:56:02.708466   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:02.737526   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:02.738442   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 00:56:02.741480   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:02.747395   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:02.754786   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:02.835638   57577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 00:56:02.835688   57577 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 00:56:02.835698   57577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 00:56:02.835730   57577 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:02.835742   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.835771   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.869410   57577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 00:56:02.869475   57577 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:02.869541   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.876483   57577 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 00:56:02.876519   57577 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 00:56:02.876558   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.880498   57577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 00:56:02.880530   57577 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:02.880573   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.880627   57577 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 00:56:02.880657   57577 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:02.880696   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.890776   57577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 00:56:02.890805   57577 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:02.890830   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:02.890872   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:56:02.890896   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:02.890912   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:56:02.890834   57577 ssh_runner.go:195] Run: which crictl
	I0814 00:56:02.890951   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:02.890972   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:03.014624   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:03.014687   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:03.014716   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:03.014736   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:03.014749   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:03.014819   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:56:03.014824   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:56:03.145922   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:03.176030   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 00:56:03.176078   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 00:56:03.176090   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 00:56:03.176082   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 00:56:03.176171   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 00:56:03.176242   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 00:56:03.241951   57577 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 00:56:03.315946   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 00:56:03.328366   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 00:56:03.328402   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 00:56:03.329654   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 00:56:03.329704   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 00:56:03.329726   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 00:56:03.351904   57577 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 00:56:03.356472   57577 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 00:56:03.493524   57577 cache_images.go:92] duration metric: took 1.039613467s to LoadCachedImages
	W0814 00:56:03.493603   57577 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0814 00:56:03.493616   57577 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 00:56:03.493756   57577 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 00:56:03.493843   57577 ssh_runner.go:195] Run: crio config
	I0814 00:56:03.539755   57577 cni.go:84] Creating CNI manager for ""
	I0814 00:56:03.539774   57577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 00:56:03.539782   57577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 00:56:03.539801   57577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 00:56:03.539924   57577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 00:56:03.539988   57577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 00:56:03.549861   57577 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 00:56:03.549918   57577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 00:56:03.559393   57577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 00:56:03.577657   57577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 00:56:03.594580   57577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 00:56:03.610515   57577 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 00:56:03.614081   57577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 00:56:03.626718   57577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 00:56:03.761957   57577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 00:56:03.781473   57577 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 00:56:03.781496   57577 certs.go:194] generating shared ca certs ...
	I0814 00:56:03.781515   57577 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:03.781687   57577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 00:56:03.781748   57577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 00:56:03.781763   57577 certs.go:256] generating profile certs ...
	I0814 00:56:03.781890   57577 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 00:56:03.781913   57577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt with IP's: []
	I0814 00:56:03.980926   57577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt ...
	I0814 00:56:03.980958   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt: {Name:mkb74cb566ecc9c5a569a38c531bfcb7e2fa270c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:03.981164   57577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key ...
	I0814 00:56:03.981183   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key: {Name:mke071999fcc1c986a35541dcdbecbdb3f0a3c8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:03.981308   57577 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 00:56:03.981339   57577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt.6e56bf34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.123]
	I0814 00:56:04.228446   57577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt.6e56bf34 ...
	I0814 00:56:04.228477   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt.6e56bf34: {Name:mk9d5f4661ffb9b248aad154ae065abe7d7ae6e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:04.281040   57577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34 ...
	I0814 00:56:04.281093   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34: {Name:mkce44877a313ffbc0e24a02a648d05ef5ad3c8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:04.281222   57577 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt.6e56bf34 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt
	I0814 00:56:04.281306   57577 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key
	I0814 00:56:04.281365   57577 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 00:56:04.281382   57577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt with IP's: []
	I0814 00:56:04.481463   57577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt ...
	I0814 00:56:04.481502   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt: {Name:mk0a3985392c4b3f9d2dcdb4e600c734b7fbf930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:04.481712   57577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key ...
	I0814 00:56:04.481735   57577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key: {Name:mk6b865d796e5fb66f46f951273d8956076ed788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 00:56:04.481946   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 00:56:04.482001   57577 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 00:56:04.482019   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 00:56:04.482099   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 00:56:04.482146   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 00:56:04.482185   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 00:56:04.482250   57577 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 00:56:04.482832   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 00:56:04.509702   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 00:56:04.531888   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 00:56:04.554315   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 00:56:04.579852   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 00:56:04.609152   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 00:56:04.640436   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 00:56:04.671647   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 00:56:04.694331   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 00:56:04.718512   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 00:56:04.740020   57577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 00:56:04.761453   57577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 00:56:04.776988   57577 ssh_runner.go:195] Run: openssl version
	I0814 00:56:04.782382   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 00:56:04.793108   57577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 00:56:04.797164   57577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 00:56:04.797233   57577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 00:56:04.802709   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 00:56:04.813098   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 00:56:04.823181   57577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 00:56:04.827389   57577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 00:56:04.827443   57577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 00:56:04.832628   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 00:56:04.842021   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 00:56:04.851273   57577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:04.855213   57577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:04.855261   57577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 00:56:04.860326   57577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
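Each openssl/ln pair above installs a CA certificate under /usr/share/ca-certificates and adds the <subject-hash>.0 symlink in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem) that OpenSSL's directory lookup expects. A small sketch that shells out to the same two commands shown in the log; the paths in main are placeholders and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the pattern from the log: ask openssl for the
// certificate's subject hash, then symlink <hash>.0 in the certs directory
// back to the PEM file so OpenSSL-based clients can find it.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(pemPath, link)
}

func main() {
	// Placeholder paths for illustration only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}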
	I0814 00:56:04.869613   57577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 00:56:04.873244   57577 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 00:56:04.873294   57577 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
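The kubeadm init attempts below consume /var/tmp/minikube/kubeadm.yaml, which minikube renders from the StartCluster config just dumped. As a rough sketch only: field names follow kubeadm's v1beta2 ClusterConfiguration API, the values are copied from that dump, and the file minikube actually writes contains additional sections (InitConfiguration, kubelet and kube-proxy settings), so treat this as illustrative, not the real file:

  cat <<'EOF' >/tmp/kubeadm.example.yaml   # hypothetical path, illustrative only
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  kubernetesVersion: v1.20.0
  clusterName: old-k8s-version-179312
  controlPlaneEndpoint: control-plane.minikube.internal:8443
  certificatesDir: /var/lib/minikube/certs
  networking:
    serviceSubnet: 10.96.0.0/12
    dnsDomain: cluster.local
  EOF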
	I0814 00:56:04.873380   57577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 00:56:04.873450   57577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 00:56:04.912777   57577 cri.go:89] found id: ""
	I0814 00:56:04.912843   57577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 00:56:04.922011   57577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 00:56:04.930898   57577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 00:56:04.940059   57577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 00:56:04.940084   57577 kubeadm.go:157] found existing configuration files:
	
	I0814 00:56:04.940135   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 00:56:04.948755   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 00:56:04.948809   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 00:56:04.957463   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 00:56:04.966129   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 00:56:04.966184   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 00:56:04.974945   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 00:56:04.983517   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 00:56:04.983564   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 00:56:04.992421   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 00:56:05.000852   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 00:56:05.000889   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 00:56:05.009622   57577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 00:56:05.127686   57577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 00:56:05.127793   57577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 00:56:05.275570   57577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 00:56:05.275740   57577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 00:56:05.275879   57577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 00:56:05.490496   57577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 00:56:05.576702   57577 out.go:204]   - Generating certificates and keys ...
	I0814 00:56:05.576822   57577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 00:56:05.576887   57577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 00:56:05.594238   57577 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 00:56:05.772117   57577 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 00:56:06.143054   57577 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 00:56:06.312207   57577 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 00:56:06.512526   57577 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 00:56:06.512805   57577 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-179312] and IPs [192.168.61.123 127.0.0.1 ::1]
	I0814 00:56:06.983098   57577 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 00:56:06.983245   57577 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-179312] and IPs [192.168.61.123 127.0.0.1 ::1]
	I0814 00:56:07.077941   57577 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 00:56:07.202800   57577 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 00:56:07.506636   57577 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 00:56:07.506796   57577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 00:56:07.686723   57577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 00:56:07.854134   57577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 00:56:07.917438   57577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 00:56:08.037255   57577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 00:56:08.056640   57577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 00:56:08.058849   57577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 00:56:08.058904   57577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 00:56:08.178154   57577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 00:56:08.180608   57577 out.go:204]   - Booting up control plane ...
	I0814 00:56:08.180714   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 00:56:08.184958   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 00:56:08.185903   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 00:56:08.186784   57577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 00:56:08.190837   57577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 00:56:48.186493   57577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 00:56:48.187251   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:56:48.187481   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:56:53.187782   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:56:53.188006   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:57:03.187342   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:57:03.187609   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:57:23.187031   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:57:23.187318   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:58:03.189250   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:58:03.189533   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:58:03.189566   57577 kubeadm.go:310] 
	I0814 00:58:03.189638   57577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 00:58:03.189697   57577 kubeadm.go:310] 		timed out waiting for the condition
	I0814 00:58:03.189749   57577 kubeadm.go:310] 
	I0814 00:58:03.189827   57577 kubeadm.go:310] 	This error is likely caused by:
	I0814 00:58:03.189885   57577 kubeadm.go:310] 		- The kubelet is not running
	I0814 00:58:03.190099   57577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 00:58:03.190115   57577 kubeadm.go:310] 
	I0814 00:58:03.190263   57577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 00:58:03.190319   57577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 00:58:03.190366   57577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 00:58:03.190378   57577 kubeadm.go:310] 
	I0814 00:58:03.190524   57577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 00:58:03.190643   57577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 00:58:03.190653   57577 kubeadm.go:310] 
	I0814 00:58:03.190782   57577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 00:58:03.190904   57577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 00:58:03.191010   57577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 00:58:03.191116   57577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 00:58:03.191127   57577 kubeadm.go:310] 
	I0814 00:58:03.191570   57577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 00:58:03.191700   57577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 00:58:03.191813   57577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 00:58:03.191991   57577 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-179312] and IPs [192.168.61.123 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-179312] and IPs [192.168.61.123 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
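The first kubeadm init attempt has now timed out because the kubelet never reported healthy on port 10248, so minikube resets and retries below. The checks the error text recommends, gathered into one manual triage pass (a sketch for reproducing the diagnosis by hand, not part of the test run):

  # is the kubelet service running, and why did it exit?
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet --no-pager | tail -n 100
  # is the kubelet health endpoint answering?
  curl -sSL http://localhost:10248/healthz
  # did any control-plane container start under CRI-O?
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause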
	
	I0814 00:58:03.192047   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 00:58:03.960822   57577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:58:03.976854   57577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 00:58:03.987718   57577 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 00:58:03.987744   57577 kubeadm.go:157] found existing configuration files:
	
	I0814 00:58:03.987794   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 00:58:03.998176   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 00:58:03.998245   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 00:58:04.010827   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 00:58:04.022352   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 00:58:04.022423   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 00:58:04.034666   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 00:58:04.046541   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 00:58:04.046610   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 00:58:04.057194   57577 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 00:58:04.067151   57577 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 00:58:04.067217   57577 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 00:58:04.077412   57577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 00:58:04.153080   57577 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 00:58:04.153168   57577 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 00:58:04.312272   57577 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 00:58:04.312539   57577 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 00:58:04.312788   57577 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 00:58:04.497766   57577 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 00:58:04.499743   57577 out.go:204]   - Generating certificates and keys ...
	I0814 00:58:04.499855   57577 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 00:58:04.499948   57577 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 00:58:04.500084   57577 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 00:58:04.500184   57577 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 00:58:04.500293   57577 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 00:58:04.500392   57577 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 00:58:04.500674   57577 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 00:58:04.501025   57577 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 00:58:04.501791   57577 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 00:58:04.502600   57577 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 00:58:04.502832   57577 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 00:58:04.502906   57577 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 00:58:04.685205   57577 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 00:58:04.767607   57577 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 00:58:04.832792   57577 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 00:58:05.036223   57577 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 00:58:05.050133   57577 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 00:58:05.051115   57577 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 00:58:05.051212   57577 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 00:58:05.199026   57577 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 00:58:05.201019   57577 out.go:204]   - Booting up control plane ...
	I0814 00:58:05.201157   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 00:58:05.210618   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 00:58:05.211495   57577 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 00:58:05.212262   57577 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 00:58:05.214749   57577 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 00:58:45.218150   57577 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 00:58:45.218693   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:58:45.218966   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:58:50.219586   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:58:50.219811   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:59:00.220396   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:59:00.220552   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 00:59:20.219468   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 00:59:20.219700   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:00:00.219514   57577 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:00:00.219757   57577 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:00:00.219773   57577 kubeadm.go:310] 
	I0814 01:00:00.219843   57577 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:00:00.219900   57577 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:00:00.219910   57577 kubeadm.go:310] 
	I0814 01:00:00.219962   57577 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:00:00.220021   57577 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:00:00.220188   57577 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:00:00.220208   57577 kubeadm.go:310] 
	I0814 01:00:00.220349   57577 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:00:00.220408   57577 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:00:00.220454   57577 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:00:00.220463   57577 kubeadm.go:310] 
	I0814 01:00:00.220575   57577 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:00:00.220696   57577 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:00:00.220716   57577 kubeadm.go:310] 
	I0814 01:00:00.220836   57577 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:00:00.220949   57577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:00:00.221060   57577 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:00:00.221172   57577 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:00:00.221189   57577 kubeadm.go:310] 
	I0814 01:00:00.221470   57577 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:00:00.221628   57577 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:00:00.221737   57577 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:00:00.221801   57577 kubeadm.go:394] duration metric: took 3m55.348510296s to StartCluster
	I0814 01:00:00.221837   57577 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:00:00.221889   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:00:00.260461   57577 cri.go:89] found id: ""
	I0814 01:00:00.260483   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.260491   57577 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:00:00.260496   57577 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:00:00.260544   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:00:00.291841   57577 cri.go:89] found id: ""
	I0814 01:00:00.291871   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.291882   57577 logs.go:278] No container was found matching "etcd"
	I0814 01:00:00.291890   57577 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:00:00.291948   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:00:00.322413   57577 cri.go:89] found id: ""
	I0814 01:00:00.322440   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.322447   57577 logs.go:278] No container was found matching "coredns"
	I0814 01:00:00.322453   57577 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:00:00.322508   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:00:00.352283   57577 cri.go:89] found id: ""
	I0814 01:00:00.352332   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.352343   57577 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:00:00.352351   57577 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:00:00.352421   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:00:00.382747   57577 cri.go:89] found id: ""
	I0814 01:00:00.382774   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.382785   57577 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:00:00.382792   57577 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:00:00.382838   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:00:00.414134   57577 cri.go:89] found id: ""
	I0814 01:00:00.414160   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.414172   57577 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:00:00.414181   57577 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:00:00.414244   57577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:00:00.446494   57577 cri.go:89] found id: ""
	I0814 01:00:00.446524   57577 logs.go:276] 0 containers: []
	W0814 01:00:00.446534   57577 logs.go:278] No container was found matching "kindnet"
	I0814 01:00:00.446546   57577 logs.go:123] Gathering logs for container status ...
	I0814 01:00:00.446576   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:00:00.481428   57577 logs.go:123] Gathering logs for kubelet ...
	I0814 01:00:00.481457   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:00:00.532708   57577 logs.go:123] Gathering logs for dmesg ...
	I0814 01:00:00.532742   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:00:00.546279   57577 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:00:00.546311   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:00:00.656441   57577 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
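The describe-nodes step fails with connection refused on localhost:8443, which is consistent with the empty crictl listings above: no kube-apiserver container ever started, so nothing serves the apiserver port. Hypothetical commands to confirm this on the node:

  # no kube-apiserver container means nothing can serve 8443
  sudo crictl ps -a --name kube-apiserver
  sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"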
	I0814 01:00:00.656469   57577 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:00:00.656484   57577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 01:00:00.762261   57577 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:00:00.762311   57577 out.go:239] * 
	W0814 01:00:00.762367   57577 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:00:00.762390   57577 out.go:239] * 
	* 
	W0814 01:00:00.763142   57577 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:00:00.766206   57577 out.go:177] 
	W0814 01:00:00.767331   57577 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:00:00.767388   57577 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:00:00.767409   57577 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:00:00.768868   57577 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 6 (222.863707ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:01.039276   60760 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-179312" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (297.40s)
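Note: the log above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to retry with the kubelet cgroup driver pinned to systemd. A minimal follow-up sketch, not part of the recorded run, assuming the profile name old-k8s-version-179312 from the log and that the VM is still reachable over SSH:

	# Inspect the kubelet unit inside the VM (the report only shows that its /healthz endpoint refused connections).
	out/minikube-linux-amd64 ssh -p old-k8s-version-179312 "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 ssh -p old-k8s-version-179312 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"

	# Retry the start with the cgroup driver the suggestion names; the other flags are copied from the failing invocation.
	out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd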

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-901410 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-901410 --alsologtostderr -v=3: exit status 82 (2m0.522781123s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-901410"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:58:03.652019   60011 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:58:03.652149   60011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:58:03.652160   60011 out.go:304] Setting ErrFile to fd 2...
	I0814 00:58:03.652164   60011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:58:03.652347   60011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:58:03.652611   60011 out.go:298] Setting JSON to false
	I0814 00:58:03.652703   60011 mustload.go:65] Loading cluster: embed-certs-901410
	I0814 00:58:03.653042   60011 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:58:03.653125   60011 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 00:58:03.653307   60011 mustload.go:65] Loading cluster: embed-certs-901410
	I0814 00:58:03.653442   60011 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:58:03.653476   60011 stop.go:39] StopHost: embed-certs-901410
	I0814 00:58:03.653885   60011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:58:03.653941   60011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:58:03.668302   60011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0814 00:58:03.668801   60011 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:58:03.669370   60011 main.go:141] libmachine: Using API Version  1
	I0814 00:58:03.669393   60011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:58:03.669800   60011 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:58:03.671981   60011 out.go:177] * Stopping node "embed-certs-901410"  ...
	I0814 00:58:03.673075   60011 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:58:03.673110   60011 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 00:58:03.673339   60011 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:58:03.673384   60011 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 00:58:03.676533   60011 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 00:58:03.676907   60011 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 01:56:39 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 00:58:03.676946   60011 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 00:58:03.677171   60011 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 00:58:03.677362   60011 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 00:58:03.677521   60011 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 00:58:03.677656   60011 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 00:58:03.785613   60011 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:58:03.862422   60011 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:58:03.920566   60011 main.go:141] libmachine: Stopping "embed-certs-901410"...
	I0814 00:58:03.920609   60011 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 00:58:03.922704   60011 main.go:141] libmachine: (embed-certs-901410) Calling .Stop
	I0814 00:58:03.926867   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 0/120
	I0814 00:58:04.928822   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 1/120
	I0814 00:58:05.930710   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 2/120
	I0814 00:58:06.932584   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 3/120
	I0814 00:58:07.934767   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 4/120
	I0814 00:58:08.936880   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 5/120
	I0814 00:58:09.938409   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 6/120
	I0814 00:58:10.939997   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 7/120
	I0814 00:58:11.941376   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 8/120
	I0814 00:58:12.942695   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 9/120
	I0814 00:58:13.944988   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 10/120
	I0814 00:58:14.946450   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 11/120
	I0814 00:58:15.947825   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 12/120
	I0814 00:58:16.949261   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 13/120
	I0814 00:58:17.950674   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 14/120
	I0814 00:58:18.952804   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 15/120
	I0814 00:58:19.954480   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 16/120
	I0814 00:58:20.955768   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 17/120
	I0814 00:58:21.957343   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 18/120
	I0814 00:58:22.958708   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 19/120
	I0814 00:58:23.961041   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 20/120
	I0814 00:58:24.962668   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 21/120
	I0814 00:58:25.963972   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 22/120
	I0814 00:58:26.965390   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 23/120
	I0814 00:58:27.966793   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 24/120
	I0814 00:58:28.968702   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 25/120
	I0814 00:58:29.970449   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 26/120
	I0814 00:58:30.972578   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 27/120
	I0814 00:58:31.974013   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 28/120
	I0814 00:58:32.975535   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 29/120
	I0814 00:58:33.976953   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 30/120
	I0814 00:58:34.978375   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 31/120
	I0814 00:58:35.980645   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 32/120
	I0814 00:58:36.982081   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 33/120
	I0814 00:58:37.983712   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 34/120
	I0814 00:58:38.985026   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 35/120
	I0814 00:58:39.986390   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 36/120
	I0814 00:58:40.988630   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 37/120
	I0814 00:58:41.990260   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 38/120
	I0814 00:58:42.992580   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 39/120
	I0814 00:58:43.994636   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 40/120
	I0814 00:58:44.997132   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 41/120
	I0814 00:58:45.998655   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 42/120
	I0814 00:58:47.000803   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 43/120
	I0814 00:58:48.002271   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 44/120
	I0814 00:58:49.004307   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 45/120
	I0814 00:58:50.005702   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 46/120
	I0814 00:58:51.007069   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 47/120
	I0814 00:58:52.008455   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 48/120
	I0814 00:58:53.009688   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 49/120
	I0814 00:58:54.010856   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 50/120
	I0814 00:58:55.012304   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 51/120
	I0814 00:58:56.013472   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 52/120
	I0814 00:58:57.014932   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 53/120
	I0814 00:58:58.016764   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 54/120
	I0814 00:58:59.018596   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 55/120
	I0814 00:59:00.019910   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 56/120
	I0814 00:59:01.021787   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 57/120
	I0814 00:59:02.023223   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 58/120
	I0814 00:59:03.024450   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 59/120
	I0814 00:59:04.026431   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 60/120
	I0814 00:59:05.027853   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 61/120
	I0814 00:59:06.029029   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 62/120
	I0814 00:59:07.030639   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 63/120
	I0814 00:59:08.031976   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 64/120
	I0814 00:59:09.033700   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 65/120
	I0814 00:59:10.035329   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 66/120
	I0814 00:59:11.036675   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 67/120
	I0814 00:59:12.037970   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 68/120
	I0814 00:59:13.039220   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 69/120
	I0814 00:59:14.041451   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 70/120
	I0814 00:59:15.042632   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 71/120
	I0814 00:59:16.044500   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 72/120
	I0814 00:59:17.045974   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 73/120
	I0814 00:59:18.047523   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 74/120
	I0814 00:59:19.049486   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 75/120
	I0814 00:59:20.050968   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 76/120
	I0814 00:59:21.052495   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 77/120
	I0814 00:59:22.054340   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 78/120
	I0814 00:59:23.055749   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 79/120
	I0814 00:59:24.057855   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 80/120
	I0814 00:59:25.059476   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 81/120
	I0814 00:59:26.060911   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 82/120
	I0814 00:59:27.062199   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 83/120
	I0814 00:59:28.063527   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 84/120
	I0814 00:59:29.065466   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 85/120
	I0814 00:59:30.066784   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 86/120
	I0814 00:59:31.068126   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 87/120
	I0814 00:59:32.069696   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 88/120
	I0814 00:59:33.071093   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 89/120
	I0814 00:59:34.072618   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 90/120
	I0814 00:59:35.073850   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 91/120
	I0814 00:59:36.075281   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 92/120
	I0814 00:59:37.076750   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 93/120
	I0814 00:59:38.078147   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 94/120
	I0814 00:59:39.080303   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 95/120
	I0814 00:59:40.081739   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 96/120
	I0814 00:59:41.083305   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 97/120
	I0814 00:59:42.084664   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 98/120
	I0814 00:59:43.086196   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 99/120
	I0814 00:59:44.088325   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 100/120
	I0814 00:59:45.089894   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 101/120
	I0814 00:59:46.091200   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 102/120
	I0814 00:59:47.092856   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 103/120
	I0814 00:59:48.094296   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 104/120
	I0814 00:59:49.096310   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 105/120
	I0814 00:59:50.097401   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 106/120
	I0814 00:59:51.098674   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 107/120
	I0814 00:59:52.100071   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 108/120
	I0814 00:59:53.101323   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 109/120
	I0814 00:59:54.103332   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 110/120
	I0814 00:59:55.104854   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 111/120
	I0814 00:59:56.106395   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 112/120
	I0814 00:59:57.108027   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 113/120
	I0814 00:59:58.109489   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 114/120
	I0814 00:59:59.111579   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 115/120
	I0814 01:00:00.112930   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 116/120
	I0814 01:00:01.114396   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 117/120
	I0814 01:00:02.116190   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 118/120
	I0814 01:00:03.117452   60011 main.go:141] libmachine: (embed-certs-901410) Waiting for machine to stop 119/120
	I0814 01:00:04.117961   60011 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 01:00:04.118016   60011 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 01:00:04.120002   60011 out.go:177] 
	W0814 01:00:04.121370   60011 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 01:00:04.121395   60011 out.go:239] * 
	* 
	W0814 01:00:04.124207   60011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:00:04.125659   60011 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-901410 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
E0814 01:00:05.519665   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410: exit status 3 (18.486633285s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:22.614387   60891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host
	E0814 01:00:22.614405   60891 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-901410" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.01s)
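Note: for this GUEST_STOP_TIMEOUT, the advice box in the log asks for the minikube logs plus the stop trace file under /tmp. A minimal collection sketch, not part of the recorded run, assuming the profile name embed-certs-901410 and the trace path printed above; the virsh calls additionally assume the kvm2 driver's libvirt domain is named after the profile, as the DBG lines in the log indicate:

	# Gather the artifacts the error message asks to attach to a GitHub issue.
	out/minikube-linux-amd64 logs --file=logs.txt -p embed-certs-901410
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .

	# Ask libvirt directly why the guest never left the "Running" state during the 120 one-second stop polls.
	sudo virsh list --all
	sudo virsh dominfo embed-certs-901410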

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-776907 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-776907 --alsologtostderr -v=3: exit status 82 (2m0.47615739s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-776907"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:58:45.055235   60340 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:58:45.055477   60340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:58:45.055486   60340 out.go:304] Setting ErrFile to fd 2...
	I0814 00:58:45.055491   60340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:58:45.055664   60340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:58:45.055882   60340 out.go:298] Setting JSON to false
	I0814 00:58:45.055953   60340 mustload.go:65] Loading cluster: no-preload-776907
	I0814 00:58:45.056251   60340 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:58:45.056317   60340 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 00:58:45.056480   60340 mustload.go:65] Loading cluster: no-preload-776907
	I0814 00:58:45.056575   60340 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:58:45.056605   60340 stop.go:39] StopHost: no-preload-776907
	I0814 00:58:45.056993   60340 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:58:45.057046   60340 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:58:45.071859   60340 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0814 00:58:45.072295   60340 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:58:45.072877   60340 main.go:141] libmachine: Using API Version  1
	I0814 00:58:45.072904   60340 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:58:45.073208   60340 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:58:45.075392   60340 out.go:177] * Stopping node "no-preload-776907"  ...
	I0814 00:58:45.076584   60340 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:58:45.076621   60340 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 00:58:45.076817   60340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:58:45.076841   60340 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 00:58:45.079616   60340 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 00:58:45.079999   60340 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 01:57:10 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 00:58:45.080021   60340 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 00:58:45.080158   60340 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 00:58:45.080337   60340 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 00:58:45.080492   60340 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 00:58:45.080661   60340 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 00:58:45.188917   60340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:58:45.247728   60340 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:58:45.286914   60340 main.go:141] libmachine: Stopping "no-preload-776907"...
	I0814 00:58:45.286942   60340 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 00:58:45.288619   60340 main.go:141] libmachine: (no-preload-776907) Calling .Stop
	I0814 00:58:45.292325   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 0/120
	I0814 00:58:46.293931   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 1/120
	I0814 00:58:47.295612   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 2/120
	I0814 00:58:48.297100   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 3/120
	I0814 00:58:49.298277   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 4/120
	I0814 00:58:50.299894   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 5/120
	I0814 00:58:51.301299   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 6/120
	I0814 00:58:52.302731   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 7/120
	I0814 00:58:53.304207   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 8/120
	I0814 00:58:54.305890   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 9/120
	I0814 00:58:55.307716   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 10/120
	I0814 00:58:56.309057   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 11/120
	I0814 00:58:57.310409   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 12/120
	I0814 00:58:58.312650   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 13/120
	I0814 00:58:59.313952   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 14/120
	I0814 00:59:00.315893   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 15/120
	I0814 00:59:01.317299   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 16/120
	I0814 00:59:02.318562   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 17/120
	I0814 00:59:03.320597   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 18/120
	I0814 00:59:04.321850   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 19/120
	I0814 00:59:05.323247   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 20/120
	I0814 00:59:06.324440   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 21/120
	I0814 00:59:07.325771   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 22/120
	I0814 00:59:08.327200   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 23/120
	I0814 00:59:09.329003   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 24/120
	I0814 00:59:10.330929   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 25/120
	I0814 00:59:11.332364   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 26/120
	I0814 00:59:12.333558   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 27/120
	I0814 00:59:13.334948   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 28/120
	I0814 00:59:14.336332   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 29/120
	I0814 00:59:15.338478   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 30/120
	I0814 00:59:16.339925   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 31/120
	I0814 00:59:17.341523   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 32/120
	I0814 00:59:18.343037   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 33/120
	I0814 00:59:19.344631   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 34/120
	I0814 00:59:20.346755   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 35/120
	I0814 00:59:21.348086   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 36/120
	I0814 00:59:22.349647   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 37/120
	I0814 00:59:23.351140   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 38/120
	I0814 00:59:24.352549   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 39/120
	I0814 00:59:25.354797   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 40/120
	I0814 00:59:26.356447   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 41/120
	I0814 00:59:27.357881   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 42/120
	I0814 00:59:28.359483   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 43/120
	I0814 00:59:29.360836   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 44/120
	I0814 00:59:30.362974   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 45/120
	I0814 00:59:31.364471   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 46/120
	I0814 00:59:32.365956   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 47/120
	I0814 00:59:33.367608   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 48/120
	I0814 00:59:34.368946   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 49/120
	I0814 00:59:35.371048   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 50/120
	I0814 00:59:36.372593   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 51/120
	I0814 00:59:37.373928   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 52/120
	I0814 00:59:38.375363   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 53/120
	I0814 00:59:39.376809   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 54/120
	I0814 00:59:40.379084   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 55/120
	I0814 00:59:41.380344   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 56/120
	I0814 00:59:42.381721   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 57/120
	I0814 00:59:43.383191   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 58/120
	I0814 00:59:44.384626   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 59/120
	I0814 00:59:45.387082   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 60/120
	I0814 00:59:46.388463   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 61/120
	I0814 00:59:47.389884   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 62/120
	I0814 00:59:48.391283   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 63/120
	I0814 00:59:49.393667   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 64/120
	I0814 00:59:50.395660   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 65/120
	I0814 00:59:51.397500   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 66/120
	I0814 00:59:52.398816   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 67/120
	I0814 00:59:53.400788   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 68/120
	I0814 00:59:54.402405   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 69/120
	I0814 00:59:55.404704   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 70/120
	I0814 00:59:56.406074   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 71/120
	I0814 00:59:57.407611   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 72/120
	I0814 00:59:58.408989   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 73/120
	I0814 00:59:59.410369   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 74/120
	I0814 01:00:00.412625   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 75/120
	I0814 01:00:01.413825   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 76/120
	I0814 01:00:02.415336   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 77/120
	I0814 01:00:03.416653   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 78/120
	I0814 01:00:04.418250   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 79/120
	I0814 01:00:05.420911   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 80/120
	I0814 01:00:06.422295   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 81/120
	I0814 01:00:07.423803   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 82/120
	I0814 01:00:08.425684   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 83/120
	I0814 01:00:09.427017   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 84/120
	I0814 01:00:10.428853   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 85/120
	I0814 01:00:11.430028   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 86/120
	I0814 01:00:12.431695   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 87/120
	I0814 01:00:13.433120   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 88/120
	I0814 01:00:14.434836   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 89/120
	I0814 01:00:15.437614   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 90/120
	I0814 01:00:16.439083   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 91/120
	I0814 01:00:17.440542   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 92/120
	I0814 01:00:18.441990   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 93/120
	I0814 01:00:19.443525   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 94/120
	I0814 01:00:20.445486   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 95/120
	I0814 01:00:21.446923   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 96/120
	I0814 01:00:22.448469   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 97/120
	I0814 01:00:23.449762   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 98/120
	I0814 01:00:24.451215   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 99/120
	I0814 01:00:25.453515   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 100/120
	I0814 01:00:26.454829   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 101/120
	I0814 01:00:27.456403   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 102/120
	I0814 01:00:28.457772   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 103/120
	I0814 01:00:29.459085   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 104/120
	I0814 01:00:30.461002   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 105/120
	I0814 01:00:31.462504   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 106/120
	I0814 01:00:32.463946   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 107/120
	I0814 01:00:33.465540   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 108/120
	I0814 01:00:34.466947   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 109/120
	I0814 01:00:35.468499   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 110/120
	I0814 01:00:36.470189   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 111/120
	I0814 01:00:37.471937   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 112/120
	I0814 01:00:38.474364   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 113/120
	I0814 01:00:39.475897   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 114/120
	I0814 01:00:40.477712   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 115/120
	I0814 01:00:41.479325   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 116/120
	I0814 01:00:42.480766   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 117/120
	I0814 01:00:43.482282   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 118/120
	I0814 01:00:44.483677   60340 main.go:141] libmachine: (no-preload-776907) Waiting for machine to stop 119/120
	I0814 01:00:45.484186   60340 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 01:00:45.484237   60340 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 01:00:45.486066   60340 out.go:177] 
	W0814 01:00:45.487266   60340 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 01:00:45.487292   60340 out.go:239] * 
	W0814 01:00:45.489853   60340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:00:45.491211   60340 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-776907 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907: exit status 3 (18.593589362s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:04.086342   61192 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host
	E0814 01:01:04.086363   61192 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-776907" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-585256 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-585256 --alsologtostderr -v=3: exit status 82 (2m0.506820294s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-585256"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:59:06.501751   60523 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:59:06.501999   60523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:59:06.502008   60523 out.go:304] Setting ErrFile to fd 2...
	I0814 00:59:06.502012   60523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:59:06.502232   60523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:59:06.502469   60523 out.go:298] Setting JSON to false
	I0814 00:59:06.502546   60523 mustload.go:65] Loading cluster: default-k8s-diff-port-585256
	I0814 00:59:06.502861   60523 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:59:06.502922   60523 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 00:59:06.503069   60523 mustload.go:65] Loading cluster: default-k8s-diff-port-585256
	I0814 00:59:06.503164   60523 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:59:06.503186   60523 stop.go:39] StopHost: default-k8s-diff-port-585256
	I0814 00:59:06.503522   60523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:59:06.503564   60523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:59:06.518949   60523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0814 00:59:06.519453   60523 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:59:06.519964   60523 main.go:141] libmachine: Using API Version  1
	I0814 00:59:06.519985   60523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:59:06.520319   60523 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:59:06.522430   60523 out.go:177] * Stopping node "default-k8s-diff-port-585256"  ...
	I0814 00:59:06.523507   60523 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 00:59:06.523530   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 00:59:06.523728   60523 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 00:59:06.523748   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 00:59:06.526156   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 00:59:06.526532   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 01:58:15 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 00:59:06.526560   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 00:59:06.526679   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 00:59:06.526788   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 00:59:06.526941   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 00:59:06.527078   60523 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 00:59:06.635586   60523 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 00:59:06.695502   60523 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 00:59:06.760049   60523 main.go:141] libmachine: Stopping "default-k8s-diff-port-585256"...
	I0814 00:59:06.760078   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 00:59:06.761534   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Stop
	I0814 00:59:06.764772   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 0/120
	I0814 00:59:07.766217   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 1/120
	I0814 00:59:08.767633   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 2/120
	I0814 00:59:09.769314   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 3/120
	I0814 00:59:10.770790   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 4/120
	I0814 00:59:11.773007   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 5/120
	I0814 00:59:12.774976   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 6/120
	I0814 00:59:13.776131   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 7/120
	I0814 00:59:14.777537   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 8/120
	I0814 00:59:15.778736   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 9/120
	I0814 00:59:16.780878   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 10/120
	I0814 00:59:17.782240   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 11/120
	I0814 00:59:18.783639   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 12/120
	I0814 00:59:19.784977   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 13/120
	I0814 00:59:20.786364   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 14/120
	I0814 00:59:21.788393   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 15/120
	I0814 00:59:22.789872   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 16/120
	I0814 00:59:23.791229   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 17/120
	I0814 00:59:24.792864   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 18/120
	I0814 00:59:25.794317   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 19/120
	I0814 00:59:26.796655   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 20/120
	I0814 00:59:27.798016   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 21/120
	I0814 00:59:28.799517   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 22/120
	I0814 00:59:29.800915   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 23/120
	I0814 00:59:30.802472   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 24/120
	I0814 00:59:31.804598   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 25/120
	I0814 00:59:32.806184   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 26/120
	I0814 00:59:33.807681   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 27/120
	I0814 00:59:34.809087   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 28/120
	I0814 00:59:35.810505   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 29/120
	I0814 00:59:36.812762   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 30/120
	I0814 00:59:37.814079   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 31/120
	I0814 00:59:38.815646   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 32/120
	I0814 00:59:39.817054   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 33/120
	I0814 00:59:40.818661   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 34/120
	I0814 00:59:41.820705   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 35/120
	I0814 00:59:42.822241   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 36/120
	I0814 00:59:43.823583   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 37/120
	I0814 00:59:44.825115   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 38/120
	I0814 00:59:45.826544   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 39/120
	I0814 00:59:46.828724   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 40/120
	I0814 00:59:47.830288   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 41/120
	I0814 00:59:48.831579   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 42/120
	I0814 00:59:49.833147   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 43/120
	I0814 00:59:50.834388   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 44/120
	I0814 00:59:51.836469   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 45/120
	I0814 00:59:52.837763   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 46/120
	I0814 00:59:53.839306   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 47/120
	I0814 00:59:54.840629   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 48/120
	I0814 00:59:55.841859   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 49/120
	I0814 00:59:56.844356   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 50/120
	I0814 00:59:57.845720   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 51/120
	I0814 00:59:58.847041   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 52/120
	I0814 00:59:59.848505   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 53/120
	I0814 01:00:00.850080   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 54/120
	I0814 01:00:01.852051   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 55/120
	I0814 01:00:02.853529   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 56/120
	I0814 01:00:03.855162   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 57/120
	I0814 01:00:04.856936   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 58/120
	I0814 01:00:05.858275   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 59/120
	I0814 01:00:06.860615   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 60/120
	I0814 01:00:07.862223   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 61/120
	I0814 01:00:08.863487   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 62/120
	I0814 01:00:09.864800   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 63/120
	I0814 01:00:10.866172   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 64/120
	I0814 01:00:11.868262   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 65/120
	I0814 01:00:12.869788   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 66/120
	I0814 01:00:13.871409   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 67/120
	I0814 01:00:14.872986   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 68/120
	I0814 01:00:15.874335   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 69/120
	I0814 01:00:16.876601   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 70/120
	I0814 01:00:17.878019   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 71/120
	I0814 01:00:18.879529   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 72/120
	I0814 01:00:19.881425   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 73/120
	I0814 01:00:20.883073   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 74/120
	I0814 01:00:21.885012   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 75/120
	I0814 01:00:22.886472   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 76/120
	I0814 01:00:23.888109   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 77/120
	I0814 01:00:24.889799   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 78/120
	I0814 01:00:25.891197   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 79/120
	I0814 01:00:26.893471   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 80/120
	I0814 01:00:27.894819   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 81/120
	I0814 01:00:28.895972   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 82/120
	I0814 01:00:29.897571   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 83/120
	I0814 01:00:30.898857   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 84/120
	I0814 01:00:31.900894   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 85/120
	I0814 01:00:32.902325   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 86/120
	I0814 01:00:33.904030   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 87/120
	I0814 01:00:34.905366   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 88/120
	I0814 01:00:35.907005   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 89/120
	I0814 01:00:36.909488   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 90/120
	I0814 01:00:37.911032   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 91/120
	I0814 01:00:38.912681   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 92/120
	I0814 01:00:39.914294   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 93/120
	I0814 01:00:40.915961   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 94/120
	I0814 01:00:41.918207   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 95/120
	I0814 01:00:42.920017   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 96/120
	I0814 01:00:43.921528   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 97/120
	I0814 01:00:44.923162   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 98/120
	I0814 01:00:45.924647   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 99/120
	I0814 01:00:46.927105   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 100/120
	I0814 01:00:47.928661   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 101/120
	I0814 01:00:48.930334   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 102/120
	I0814 01:00:49.931796   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 103/120
	I0814 01:00:50.933345   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 104/120
	I0814 01:00:51.935500   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 105/120
	I0814 01:00:52.936983   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 106/120
	I0814 01:00:53.938692   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 107/120
	I0814 01:00:54.940018   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 108/120
	I0814 01:00:55.941484   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 109/120
	I0814 01:00:56.943667   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 110/120
	I0814 01:00:57.945084   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 111/120
	I0814 01:00:58.946717   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 112/120
	I0814 01:00:59.948355   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 113/120
	I0814 01:01:00.949824   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 114/120
	I0814 01:01:01.951951   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 115/120
	I0814 01:01:02.953408   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 116/120
	I0814 01:01:03.954877   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 117/120
	I0814 01:01:04.956438   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 118/120
	I0814 01:01:05.957971   60523 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for machine to stop 119/120
	I0814 01:01:06.959381   60523 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 01:01:06.959431   60523 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 01:01:06.961296   60523 out.go:177] 
	W0814 01:01:06.962482   60523 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 01:01:06.962507   60523 out.go:239] * 
	W0814 01:01:06.965138   60523 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:01:06.966445   60523 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-585256 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256: exit status 3 (18.621755165s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:25.590453   61335 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	E0814 01:01:25.590471   61335 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-585256" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-179312 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-179312 create -f testdata/busybox.yaml: exit status 1 (43.335105ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-179312" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-179312 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 6 (218.110766ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:01.299918   60801 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-179312" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 6 (213.465273ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:01.515252   60831 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-179312" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-179312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-179312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m31.207050976s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-179312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-179312 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-179312 describe deploy/metrics-server -n kube-system: exit status 1 (45.031002ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-179312" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-179312 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 6 (221.592027ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:32.988582   61590 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-179312" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410: exit status 3 (3.167788641s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:25.782388   60987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host
	E0814 01:00:25.782420   60987 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-901410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-901410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152189015s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-901410 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410: exit status 3 (3.063499615s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:00:34.998367   61068 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host
	E0814 01:00:34.998389   61068 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.210:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-901410" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907: exit status 3 (3.1680323s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:07.254363   61271 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host
	E0814 01:01:07.254385   61271 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-776907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-776907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152395207s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-776907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907: exit status 3 (3.063199265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:16.470385   61401 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host
	E0814 01:01:16.470413   61401 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-776907" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256: exit status 3 (3.16760389s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:28.758356   61516 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	E0814 01:01:28.758384   61516 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-585256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-585256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152821243s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-585256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256: exit status 3 (3.067081551s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 01:01:37.978434   61659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host
	E0814 01:01:37.978458   61659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.110:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-585256" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (770.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0814 01:02:14.185390   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:05:05.519806   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:06:28.595863   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:07:14.185850   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:10:05.519470   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m47.421362666s)

                                                
                                                
-- stdout --
	* [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
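Note: the preload step copies the cached tarball preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 (473237281 bytes) into the guest and unpacks it into /var before deleting it. The same extraction and verification, using exactly the commands from this log (sketch only):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json    # check whether the registry.k8s.io/kube-* images are now visible to CRI-O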
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
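Note: both the preload check and the per-image cache fallback fail here; at least pause_3.2 is missing from the host's .minikube/cache/images directory, so the control-plane images will have to be pulled from the registry when kubeadm brings the cluster up. A hypothetical way to pre-seed that cache from the host before starting (not something this run does):

    out/minikube-linux-amd64 cache add registry.k8s.io/pause:3.2
    out/minikube-linux-amd64 cache add registry.k8s.io/etcd:3.4.13-0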
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
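Note: the [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube generates; the ExecStart flags pin the container runtime endpoint to the CRI-O socket and the node IP/hostname to this VM (192.168.61.123 / old-k8s-version-179312). Once it has been copied over (see the scp of 10-kubeadm.conf below), it can be inspected on the guest with, for example:

    systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf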
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
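Note: the generated kubeadm config above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration), all rendered for Kubernetes v1.20.0 with the cgroupfs driver and the CRI-O socket. One way to sanity-check it once it has been copied to /var/tmp/minikube/kubeadm.yaml (this test does not do so) would be a kubeadm dry run, sketched here:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run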
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
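Note: the openssl x509 -hash -noout calls above compute the OpenSSL subject-name hash for each CA file, and the ln -fs commands create the corresponding <hash>.0 symlinks in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certs). Reproducing one of the hashes by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 on this host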
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
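Note: the -checkend 86400 probes above make openssl exit non-zero if the given certificate expires within the next 86400 seconds (24 hours), which is apparently how minikube decides whether the existing control-plane certificates are still usable. The same check with an explicit result (sketch):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"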
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
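Note: on this restart path minikube re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full kubeadm init. The phases available to this kubeadm binary can be listed with (sketch):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase --help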
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
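Note: the block above is minikube polling for a kube-apiserver process roughly every 500 ms via pgrep; from 01:06:21 to 01:07:21 no probe found a matching process, so the wait falls through to log collection below. The same probe can be run by hand on the guest:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "exit=$?"    # non-zero exit means no matching process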
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
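Note: after the wait, minikube collects diagnostics: the kubelet and CRI-O journals, dmesg, "kubectl describe nodes" (which fails here because nothing is listening on localhost:8443), and the container list. The same data can be gathered manually with the commands shown in the log, e.g.:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a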
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
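	[editor note] Every iteration in this stretch ends identically: no control-plane containers exist in any state and "kubectl describe nodes" is refused on localhost:8443 (minikube's apiserver port), so the v1.20.0 apiserver never came up during this window. A hedged sketch of follow-up checks one might run on the node; these commands are not in the log, and the static-pod manifest path is the standard kubeadm location that minikube uses, assumed here:

	  sudo ss -ltnp | grep 8443                             # is anything listening on the apiserver port?
	  ls /etc/kubernetes/manifests                          # static pod manifests the kubelet should be starting
	  sudo crictl ps -a                                     # any containers at all, in any state
	  sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'apiserver|static pod|failed'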
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
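Before re-running kubeadm init, minikube drops any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint. A rough shell equivalent of the check-and-remove sequence above (a sketch; the endpoint and file names are the ones shown in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

Here all four files are already missing, so the grep fails and each rm is effectively a no-op.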
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
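The kubeadm output above points at the usual kubelet checks; on this CRI-O node they would look roughly like the following (a sketch; CONTAINERID is a placeholder and the socket path is the one quoted in the log):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID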
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	* 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	* 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
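Note: the exit path above is K8S_KUBELET_NOT_RUNNING: 'kubeadm init' for v1.20.0 times out because the kubelet never answers on http://localhost:10248/healthz, and minikube's own suggestion points at a kubelet/CRI-O cgroup-driver mismatch. A minimal manual follow-up sketch, assuming that suggestion is the actual cause (the inspection commands mirror the ones printed in the kubeadm output above; the retry simply re-runs the failed start command with the suggested --extra-config flag added and is not a verified fix):

	# check kubelet status and journal on the node, as suggested in the kubeadm output
	out/minikube-linux-amd64 -p old-k8s-version-179312 ssh "sudo systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 100"
	# retry the failed start with the suggested kubelet cgroup driver override
	out/minikube-linux-amd64 start -p old-k8s-version-179312 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd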
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (225.761654ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25: (1.548120726s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
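The preload.go and cache.go entries above (for both the default-k8s-diff-port and old-k8s-version profiles) look for a locally cached preloaded-images tarball and skip the download when it is already on disk. A minimal Go sketch of that check-before-download pattern follows; the directory layout matches the paths in the log, but ensurePreload and downloadTarball are hypothetical helpers, not minikube's actual API.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensurePreload returns the path to a cached preload tarball, downloading it
    // only when it is not already present on disk.
    func ensurePreload(cacheDir, k8sVersion, runtime string) (string, error) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        path := filepath.Join(cacheDir, "preloaded-tarball", name)

        if _, err := os.Stat(path); err == nil {
            // Found in cache, skipping download (mirrors the log above).
            return path, nil
        } else if !os.IsNotExist(err) {
            return "", err // unexpected stat error (permissions, etc.)
        }

        // Cache miss: fetch the tarball (checksum verification omitted in this sketch).
        if err := downloadTarball(path); err != nil {
            return "", err
        }
        return path, nil
    }

    // downloadTarball is a placeholder for the real HTTP download step.
    func downloadTarball(dst string) error {
        return fmt.Errorf("download of %s not implemented in this sketch", dst)
    }

    func main() {
        path, err := ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.20.0", "cri-o")
        fmt.Println(path, err)
    }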
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
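The long run of "Error dialing TCP ... no route to host" lines above is libmachine repeatedly probing the embed-certs-901410 VM's SSH port (192.168.50.210:22) while the guest is unreachable; the probe keeps failing for several minutes before provisioning gives up. Below is a rough Go sketch of such a port-availability loop; the interval, timeout, and function names are illustrative assumptions, not libmachine's actual values.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSHPort polls addr until a TCP connection succeeds or the deadline passes.
    func waitForSSHPort(addr string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                conn.Close()
                return nil // port is reachable, SSH provisioning can proceed
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s: %v", addr, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForSSHPort("192.168.50.210:22", 3*time.Second, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }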
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
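The retry.go lines while waiting for no-preload-776907 to come up show the delay between attempts growing from a few hundred milliseconds to several seconds until the DHCP lease for 192.168.72.94 appears. A small sketch of that style of randomized, growing backoff follows; the multiplier, jitter range, and cap are assumptions rather than minikube's exact parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
    // randomized, growing delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Jitter: sleep somewhere between 0.5x and 1.5x of the current delay.
            sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if delay *= 2; delay > 4*time.Second {
                delay = 4 * time.Second // cap, roughly where the log's delays level off
            }
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(10, 200*time.Millisecond, func() error {
            if tries++; tries < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("result:", err)
    }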
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
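provision.go above issues a per-machine server certificate signed by the minikube CA, with the VM's IPs and hostnames listed as SANs. The sketch below shows one way to produce such a certificate with Go's crypto/x509, assuming the CA certificate and key have already been loaded; it illustrates the technique and is not minikube's actual implementation.

    package provision

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with the CA.
    func issueServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey,
        dnsNames []string, ips []net.IP) ([]byte, *ecdsa.PrivateKey, error) {

        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-776907"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. localhost, minikube, no-preload-776907
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.72.94
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }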
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
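The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the local clock, and only force a resync when the difference exceeds a tolerance; here the delta is about 78ms and is accepted. A small sketch of that comparison, assuming the remote command output has already been captured as a string and using an assumed 2-second tolerance:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the absolute
    // difference from the supplied local time. (float64 parsing loses sub-microsecond
    // precision, which is fine for a tolerance check like this.)
    func clockDelta(guestOutput string, now time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, fmt.Errorf("parsing guest clock %q: %v", guestOutput, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := now.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, nil
    }

    func main() {
        delta, err := clockDelta("1723597531.803704808", time.Now())
        if err != nil {
            fmt.Println(err)
            return
        }
        const tolerance = 2 * time.Second // assumed value, not necessarily minikube's
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }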
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
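docker.go above stops, disables, and masks the cri-docker and docker units so that CRI-O is the only container runtime left running; several of these commands can fail harmlessly when a unit does not exist, which is why the log phrases it as "(if available)". A brief sketch of running that sequence while tolerating failures; the local `run` helper stands in for minikube's ssh_runner and is an assumption of this sketch.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a stand-in for an SSH command runner; here it simply shells out locally.
    func run(cmd string) error {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %v: %s", cmd, err, out)
        }
        return nil
    }

    // disableDockerRuntimes stops and masks cri-docker and docker; missing units
    // are tolerated, mirroring the "(if available)" wording in the log.
    func disableDockerRuntimes() {
        steps := []string{
            "sudo systemctl stop -f cri-docker.socket",
            "sudo systemctl stop -f cri-docker.service",
            "sudo systemctl disable cri-docker.socket",
            "sudo systemctl mask cri-docker.service",
            "sudo systemctl stop -f docker.socket",
            "sudo systemctl stop -f docker.service",
            "sudo systemctl disable docker.socket",
            "sudo systemctl mask docker.service",
        }
        for _, s := range steps {
            if err := run(s); err != nil {
                fmt.Println("ignoring:", err) // the unit may simply not exist on this guest
            }
        }
    }

    func main() {
        disableDockerRuntimes()
    }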
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
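The two SSH commands above first set the guest hostname and then make sure /etc/hosts maps 127.0.1.1 to it, so the node can resolve its own name without DNS. A minimal Go sketch of how that /etc/hosts patch can be assembled for an arbitrary hostname; the buildHostsPatch helper is hypothetical and only mirrors the shell visible in the log, it is not minikube's actual code:

    package main

    import "fmt"

    // buildHostsPatch returns a shell snippet that maps 127.0.1.1 to the given
    // hostname in /etc/hosts, mirroring the command shown in the log above.
    // (Hypothetical helper, for illustration only.)
    func buildHostsPatch(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(buildHostsPatch("default-k8s-diff-port-585256"))
    }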
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
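Taken together, the sed invocations above edit the drop-in /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as its pause image, the "cgroupfs" cgroup manager, a pod-scoped conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl. A rough Go sketch that replays the first few of those edits with os/exec; in minikube they run on the guest VM through ssh_runner, so executing them locally like this is purely illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Replays the CRI-O config edits seen in the log. Paths and privileges are
    // assumptions; this is a sketch, not minikube's implementation.
    func main() {
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
        }
        for _, c := range cmds {
            out, err := exec.Command("sh", "-c", c).CombinedOutput()
            fmt.Printf("%s\n%serr=%v\n", c, out, err)
        }
    }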
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
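While the restarted old-k8s-version-179312 domain boots, libmachine cannot yet find a DHCP lease for its MAC address, so retry.go keeps re-checking with a growing, randomized delay (197ms, 310ms, 401ms, 407ms above). A small Go sketch of that retry-with-backoff pattern; the growth factor and attempt limit here are assumptions, not minikube's values:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
    // randomized, growing interval between tries, similar to the "will retry
    // after ..." lines emitted while waiting for the VM's DHCP lease. Sketch only.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        haveIP := false
        err := retryWithBackoff(5, 200*time.Millisecond, func() error {
            if !haveIP {
                haveIP = true // pretend the lease appears on the second look
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
        fmt.Println("done:", err)
    }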
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
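The healthz probes above show the usual restart sequence: the endpoint first answers 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200, at which point the wait ends after roughly four seconds. A minimal Go sketch of such a polling loop; certificate verification is skipped only to keep the example short, and api_server.go's real logic differs in detail:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires. Sketch of the pattern visible in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Anonymous probe for illustration only; skips cert verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.72.94:8443/healthz", 4*time.Minute))
    }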
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
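The check above reads the kube-apiserver's OOM score adjustment straight from procfs (cat /proc/$(pgrep kube-apiserver)/oom_adj) and reports -16 for the restarted apiserver. A minimal stand-alone Go sketch of the same probe (an illustration only, not minikube's ops.go helper) would be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep prints one PID per matching process; take the first match,
	// then read the legacy oom_adj file referenced in the log above.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver process not found:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}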
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
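The bash one-liner above keeps the host.minikube.internal mapping idempotent: it strips any existing line ending in that name, appends the current gateway address, and copies the temp file back over /etc/hosts. The same idea as a stand-alone Go sketch (illustration only, not minikube's ssh_runner path; it must run as root):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal" // gateway IP from the log

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping so the entry is written exactly once.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}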
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
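The sequence above is the preload path: crictl reports that the expected kube images are missing, the 389 MB lz4 tarball is copied over and unpacked into /var with tar -I lz4, the tarball is removed, and a second crictl pass confirms the images are now present. A rough Go sketch of the "are the images already there?" check (illustration only; the crictl JSON field names are an assumption based on the CRI ListImages response shape):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded images present, skipping tarball copy")
				return
			}
		}
	}
	fmt.Println("couldn't find", want, "- copy and extract /preloaded.tar.lz4")
}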
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
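The kubeadm/kubelet/kube-proxy configuration printed above is generated on the host and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2172 bytes here) before the init phases run. A trimmed, hypothetical sketch of rendering such a config with Go text templates; the template itself is made up for illustration, and only the values (address, port, node name) come from the log:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}{"192.168.39.110", 8444, "default-k8s-diff-port-585256"}

	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}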
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
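The node_ready.go lines poll the node object (here for up to 6m0s) until its Ready condition turns True; while the kubelet is still coming up the status stays "False" as above. A minimal client-go sketch of that wait (illustration only, not minikube's helper; the kubeconfig path and node name are taken from the surrounding log, the retry interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19429-9425/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-776907", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}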
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
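The openssl/ln steps above install each CA into the node's trust store the way OpenSSL expects: a symlink named after the certificate's subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A small Go sketch of the same hash-and-link step (illustration only; needs root, and the paths follow the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/etc/ssl/certs/minikubeCA.pem"

	// Ask openssl for the subject hash, exactly as the logged command does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA in this run
	link := "/etc/ssl/certs/" + hash + ".0"

	_ = os.Remove(link) // replace any stale link, like "ln -fs"
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}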
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
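Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 24 hours from now; exit status 1 means it expires within that window. The same check with Go's standard library (a sketch only; the file path is one of those probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of "openssl x509 -checkend 86400": is NotAfter past now+24h?
	cutoff := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(cutoff) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", cutoff.Format(time.RFC3339))
}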
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
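	# The interleaved pod_ready lines are three separate clusters (PIDs 61115, 61447, 61689),
	# each polling its metrics-server pod for the Ready condition. A rough equivalent of one
	# poll, shown as a sketch: the kubectl context placeholder below stands for whichever
	# profile the test created, and the pod name is taken from the log lines above:
	#
	#   kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-6cql9 \
	#     -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	#
	#   # prints "False" until the metrics-server container passes its readiness probe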
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
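The block above shows minikube probing, via crictl, for each control-plane container it expects (kube-apiserver, etcd, coredns, and so on) and finding none, which is why it falls back to the kubelet, dmesg, and CRI-O journals. A minimal Go sketch of that per-name crictl query (illustrative only, not minikube's cri.go; assumes crictl and sudo are available on the node):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of CRI containers whose name matches the
// given filter, mirroring the "crictl ps -a --quiet --name=..." calls above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("%s: error: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```

An empty result for every name, as in the log above, means the control plane never came back up after the restart.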
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
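The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist after the kubeadm reset). A rough Go equivalent run locally on the node, with the endpoint and file list taken from the log (a sketch, not minikube's kubeadm.go):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and file list copied from the log above.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up, as in this run
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale kubeconfig:", f)
			os.Remove(f)
		}
	}
}
```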
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
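The kubelet-check above polls http://127.0.0.1:10248/healthz until it answers 200, and the api-check that follows does the same against the API server. A minimal sketch of that kind of health poll (URL and timeout taken from the log; illustrative only):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:10248/healthz" // kubelet healthz endpoint named above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kubelet healthz")
}
```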
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
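The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from the default kubeadm CA path (path assumed; not part of the test itself):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA location
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```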
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
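The startup sequence above repeatedly polls each control-plane pod (coredns, etcd, kube-apiserver, and so on) until its Ready condition reports True, then moves on to the apiserver health and service-account checks. Below is a minimal, self-contained sketch of that kind of readiness poll using client-go; the kubeconfig path, timeout, and poll interval are illustrative assumptions and this is not minikube's actual pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative kubeconfig path; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-default-k8s-diff-port-585256", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll until Ready or timeout
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }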
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
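The [kubelet-check] lines above correspond to an HTTP GET against the kubelet's local health endpoint (http://localhost:10248/healthz), which keeps failing with "connection refused" until the kubelet comes up. A minimal Go sketch equivalent to the curl call quoted in the log; the timeout and retry count are illustrative, not kubeadm's actual values.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        for i := 0; i < 8; i++ { // retry a few times while the kubelet starts
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            } else {
                fmt.Println("kubelet not reachable yet:", err)
            }
            time.Sleep(5 * time.Second)
        }
    }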
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
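The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files simply no longer exist after the kubeadm reset, so each grep exits with status 2 and the file is removed anyway). A rough sketch of that cleanup logic; the endpoint string and file list are taken from the log, everything else is an illustrative assumption rather than minikube's kubeadm.go code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or stale config: remove it so kubeadm init regenerates it.
                os.Remove(f)
                fmt.Println("removed stale config:", f)
                continue
            }
            fmt.Println("keeping config:", f)
        }
    }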
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
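The --discovery-token-ca-cert-hash value printed with the join commands above is a SHA-256 digest of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo), which joining nodes use to pin the control plane's CA. A small Go sketch that reproduces such a hash from a CA certificate; the path /etc/kubernetes/pki/ca.crt is the conventional kubeadm location and is an assumption here.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }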
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.701399817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598068701363839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=611ec1ae-d3a9-4bce-8213-0ea97612969b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.703393169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5f8a885-a214-468c-8808-5a735471ea5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.703448231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5f8a885-a214-468c-8808-5a735471ea5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.703493700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a5f8a885-a214-468c-8808-5a735471ea5e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.733787197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afc96312-2b56-4516-8beb-17d6b0266d5f name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.733872510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afc96312-2b56-4516-8beb-17d6b0266d5f name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.735059718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=377918f4-6852-427e-aeeb-aa8c2acee970 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.735482240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598068735458029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=377918f4-6852-427e-aeeb-aa8c2acee970 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.736084865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fec69e02-7df6-4446-b4e1-b4a2ba3c931a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.736156538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fec69e02-7df6-4446-b4e1-b4a2ba3c931a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.736191926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fec69e02-7df6-4446-b4e1-b4a2ba3c931a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.767845159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ce68236-09ef-44f1-9d96-c1217d5ce650 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.767919028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ce68236-09ef-44f1-9d96-c1217d5ce650 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.768782453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73556cb6-d4e3-48bc-9c38-6bcd18e98bc7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.769137554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598068769112413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73556cb6-d4e3-48bc-9c38-6bcd18e98bc7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.769731751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94cf913e-7026-4cc6-85e6-066c30f577ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.769777234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94cf913e-7026-4cc6-85e6-066c30f577ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.769809254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=94cf913e-7026-4cc6-85e6-066c30f577ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.799582174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ee71313-c03f-4c20-8bc6-7a6e07be87c8 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.799678974Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ee71313-c03f-4c20-8bc6-7a6e07be87c8 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.800531216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1aadb9e5-71f1-4872-bb7f-441e98d97d0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.800886657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598068800862069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1aadb9e5-71f1-4872-bb7f-441e98d97d0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.801375199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5dfb72a-2099-48f0-a612-5ca616aa3092 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.801420237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5dfb72a-2099-48f0-a612-5ca616aa3092 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:14:28 old-k8s-version-179312 crio[648]: time="2024-08-14 01:14:28.801451479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b5dfb72a-2099-48f0-a612-5ca616aa3092 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug14 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051654] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037900] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug14 01:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.069039] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.745693] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.067571] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073344] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.191121] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.114642] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.237276] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +6.127376] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.063905] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.036138] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +12.708573] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 01:10] systemd-fstab-generator[5126]: Ignoring "noauto" option for root device
	[Aug14 01:12] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.068703] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:14:28 up 8 min,  0 users,  load average: 0.01, 0.05, 0.01
	Linux old-k8s-version-179312 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc00038b5c0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0006be930, 0x24, 0x0, ...)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: net.(*Dialer).DialContext(0xc0002bfe00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006be930, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000023cc0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006be930, 0x24, 0x60, 0x7fb0d45a5660, 0x118, ...)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: net/http.(*Transport).dial(0xc000a62000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0006be930, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: net/http.(*Transport).dialConn(0xc000a62000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000103da0, 0x5, 0xc0006be930, 0x24, 0x0, 0xc000b50d80, ...)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: net/http.(*Transport).dialConnFor(0xc000a62000, 0xc000b4f600)
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]: created by net/http.(*Transport).queueForDial
	Aug 14 01:14:25 old-k8s-version-179312 kubelet[5587]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 14 01:14:26 old-k8s-version-179312 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 14 01:14:26 old-k8s-version-179312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 14 01:14:26 old-k8s-version-179312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 14 01:14:26 old-k8s-version-179312 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 14 01:14:26 old-k8s-version-179312 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 14 01:14:26 old-k8s-version-179312 kubelet[5639]: I0814 01:14:26.753426    5639 server.go:416] Version: v1.20.0
	Aug 14 01:14:26 old-k8s-version-179312 kubelet[5639]: I0814 01:14:26.753848    5639 server.go:837] Client rotation is on, will bootstrap in background
	Aug 14 01:14:26 old-k8s-version-179312 kubelet[5639]: I0814 01:14:26.756812    5639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 14 01:14:26 old-k8s-version-179312 kubelet[5639]: W0814 01:14:26.758089    5639 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 14 01:14:26 old-k8s-version-179312 kubelet[5639]: I0814 01:14:26.758186    5639 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (224.812753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-179312" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (770.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-776907 -n no-preload-776907
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:19:20.040494844 +0000 UTC m=+5550.203992051
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-776907 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-776907 logs -n 25: (1.978977674s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
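(Editorial note, not part of the log.) The block above shows the kvm2 driver polling the libvirt DHCP leases with a roughly increasing backoff ("will retry after ...: waiting for machine to come up") until the VM's IP appears, then probing SSH by running "exit 0" through an external ssh client. A minimal Go sketch of that wait-with-backoff shape, purely illustrative and not minikube's actual retry.go API (function names, intervals and growth factor are assumptions):

// sketch: poll a probe with a growing, jittered delay until it succeeds or a deadline passes
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(probe func() error, deadline time.Duration) error {
	start := time.Now()
	delay := 300 * time.Millisecond // illustrative initial backoff
	for time.Since(start) < deadline {
		if err := probe(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the interval, as the log intervals roughly do
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err)
}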
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
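(Editorial note, not part of the log.) The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, put conmon in the "pod" cgroup, and open unprivileged low ports via default_sysctls, before reloading systemd and restarting CRI-O. A rough Go equivalent of the two central substitutions, illustrative only and not minikube's implementation:

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mirrors the logged sed edits: replace any existing
// pause_image and cgroup_manager lines with the desired values.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}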
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
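(Editorial note, not part of the log.) The "daemon lookup ... No such image" errors above are expected in the no-preload flow: minikube first asks the local Docker daemon for each image, and on failure falls back to the tarballs it keeps under .minikube/cache/images, copying them to /var/lib/minikube/images on the guest and loading them into CRI-O with "podman load" (the "Transferred and loaded ... from cache" lines further down). A compressed sketch of that fallback in Go; the tarball naming scheme and paths here are hypothetical, not minikube's actual layout:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureImage prefers an image already present in the runtime's store,
// otherwise loads it from a cached tarball with `podman load`.
func ensureImage(image, cacheDir string) error {
	// 1. Already in the container runtime's image store?
	if err := exec.Command("sudo", "podman", "image", "exists", image).Run(); err == nil {
		return nil
	}
	// 2. Fall back to a cached tarball (hypothetical naming scheme).
	tar := filepath.Join(cacheDir, strings.ReplaceAll(image, "/", "_")+".tar")
	if _, err := os.Stat(tar); err != nil {
		return fmt.Errorf("no cached tarball for %s: %w", image, err)
	}
	// 3. Load it into the store.
	return exec.Command("sudo", "podman", "load", "-i", tar).Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.10", "/var/lib/minikube/images"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}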
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
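
The kubeadm.yaml written to the node above is the multi-document YAML dumped earlier in this log (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). The following Go sketch is not part of the test log; it is a minimal illustration, assuming gopkg.in/yaml.v3 and a local copy of that file, of splitting the documents and reporting the kind of each one.

	// parse_kubeadm.go: list the documents in a generated kubeadm.yaml (illustrative sketch).
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the generated config
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm configs are concatenated YAML documents separated by "---" on its own line.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
		}
	}
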
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
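
The six openssl runs above each ask "-checkend 86400", i.e. whether the certificate will still be valid 24 hours from now. This is not part of the test log, but a rough Go equivalent of one such probe, using only the standard library; the certificate path is the one from the log and is illustrative.

	// checkend.go: report whether a PEM certificate expires within the next 86400 seconds (sketch).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 86400s")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}
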
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
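
The "Waiting for SSH to be available" step above repeatedly runs an external ssh command that executes "exit 0" against the machine until it succeeds. The following is not part of the test log; it is a minimal sketch of the same probe in Go using golang.org/x/crypto/ssh, with the host, user and key path taken from the log lines above purely for illustration (a real run would read them from the machine config).

	// ssh_probe.go: check that an SSH "exit 0" succeeds against the VM (illustrative sketch).
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-585256/id_rsa"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", "192.168.39.110:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}
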
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
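
The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it against the host clock, accepting the drift when it stays within a tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log and an assumed one-second tolerance (the actual threshold is not shown here):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed. The 1s tolerance is an assumption
// for illustration; the log only shows the measured delta passing.
func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the fix.go lines above.
	host := time.Date(2024, 8, 14, 1, 5, 51, 966492542, time.UTC)
	guest := time.Date(2024, 8, 14, 1, 5, 52, 47212997, time.UTC)
	delta, ok := withinTolerance(host, guest, time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}

With these values the computed delta is about 80.72ms, matching the delta=80.720455ms reported in the log.
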
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
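
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, keep conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough local Go sketch of the same edits (an illustration of the transformations, not minikube's own code; the file path is taken from the log):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)

	// Pin the pause image, as the first sed edit does.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Switch the cgroup manager to cgroupfs and keep conmon in the pod cgroup.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	// Open unprivileged ports from 0 via default_sysctls, mirroring the last edits.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
}
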
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
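
The netfilter check above is expected to fail on a fresh guest, because the net.bridge.bridge-nf-call-iptables key only exists once br_netfilter is loaded; the log therefore falls back to modprobe and then enables IPv4 forwarding. A small sketch of that verify-or-load step (error handling simplified; command names follow the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and returns any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// A failure here is expected before br_netfilter is loaded and
	// triggers the modprobe fallback, as in the log above.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("bridge netfilter sysctl not available (%v); loading br_netfilter", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatal(err)
		}
	}
	// Enable IPv4 forwarding, as the log does with a shell redirect.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatal(err)
	}
}
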
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
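
Before declaring the runtime ready, start.go above waits up to 60s for /var/run/crio/crio.sock and then for a successful crictl version. A minimal polling sketch of that readiness gate (the 500ms poll interval is an assumption; only the 60s budget appears in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCRI polls until the CRI-O socket exists and crictl can reach it,
// or the deadline passes.
func waitForCRI(sock string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("CRI runtime not ready after %v", timeout)
}

func main() {
	if err := waitForCRI("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
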
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
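
The retry.go lines above poll libvirt for the domain's DHCP lease, waiting a little longer after each miss (197ms, 310ms, 401ms, 407ms, ...). A generic sketch of that retry-with-backoff pattern (the linear-plus-jitter schedule is an assumption; the log only shows the observed waits):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
// exhausted, sleeping a little longer (with jitter) after each failure.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("retry %d: will retry after %v: %v\n", attempt+1, wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	attempts := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the DHCP lease appeared
	})
	fmt.Println("result:", err)
}
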
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
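
api_server.go above keeps probing https://192.168.72.94:8443/healthz, treating 403 (anonymous user) and 500 (post-start hooks still running) as "not ready yet" until the endpoint answers 200 with body "ok". A self-contained polling sketch of that health check (the insecure TLS transport and poll interval are assumptions made so the example runs standalone against a self-signed apiserver cert):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func waitForAPIServer(url string, timeout time.Duration) error {
	// Skip certificate verification for this probe only (assumption for the sketch).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 both mean "not ready yet"; only 200 with body "ok" counts.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.72.94:8443/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
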
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
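
cni.go writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist, but the file's contents are not shown in the log. As a hedged illustration only, a minimal bridge + host-local conflist of the kind the bridge CNI plugin accepts could be written like this (the cniVersion, bridge name, and subnet below are assumptions, not the file minikube generates; the subnet simply reuses the 10.244.0.0/16 pod CIDR seen elsewhere in this log):

package main

import (
	"log"
	"os"
)

func main() {
	// Illustrative only: field values are assumptions; the real
	// /etc/cni/net.d/1-k8s.conflist written above is not shown in the log.
	conflist := `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
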
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
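
pod_ready.go above waits up to 4m0s per system-critical pod, skipping early while the node itself reports Ready=False. An equivalent check can be run from outside the cluster with kubectl wait; the sketch below shells out to kubectl using the label selectors listed in the log (the context name is the profile from this run; this is an external equivalent, not minikube's internal wait):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Selectors come from the label list logged above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "--context", "no-preload-776907",
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod", "-l", sel, "--timeout=4m0s")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "pods matching %q not Ready: %v\n", sel, err)
			os.Exit(1)
		}
	}
}
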
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
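
The preload handling above stats /preloaded.tar.lz4 on the guest, copies the ~389MB preloaded-images tarball over when it is missing, extracts it into /var with lz4, and re-runs crictl images to confirm the runtime now sees the images. A condensed sketch of that check-then-extract flow (paths and tar flags follow the log; it assumes the tarball and lz4 are already present on the machine):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // guest-side path from the log
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing, it would need to be copied over first: %v", err)
	}
	// Same extraction the log shows: preserve xattrs and unpack under /var.
	extract := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		log.Fatal(err)
	}
	// Confirm the runtime now sees the preloaded images.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("crictl reported %d bytes of image metadata", len(out))
}
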
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
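Note: the generated KubeletConfiguration above deliberately neutralises disk-based housekeeping (imageGCHighThresholdPercent: 100 and "0%" hard-eviction thresholds), matching the "disable disk resource management by default" comment, so disk pressure on the small test VM cannot evict pods mid-run. Below is a minimal, hedged Go sketch (not minikube code; it assumes the gopkg.in/yaml.v3 module is available) that parses that excerpt and echoes those settings:

```go
// Illustrative only: parse the KubeletConfiguration excerpt above and print
// the settings that disable disk-based eviction. Assumes gopkg.in/yaml.v3.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg struct {
		ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard                map[string]string `yaml:"evictionHard"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		panic(err)
	}
	fmt.Println("imageGCHighThresholdPercent:", cfg.ImageGCHighThresholdPercent)
	for signal, threshold := range cfg.EvictionHard {
		// A "0%" hard-eviction threshold means the signal can never fire.
		fmt.Printf("evictionHard[%s] = %s\n", signal, threshold)
	}
}
```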
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
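The run of `openssl x509 ... -checkend 86400` commands above asserts that each existing control-plane certificate remains valid for at least another 24 hours before it is reused for the restart. A rough Go equivalent of one such check (an illustrative sketch, not minikube's certs.go; the path is one of the files checked above and reading it requires root on the VM):

```go
// Hedged illustration: roughly what "openssl x509 -checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; adjust for other certs being checked.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the cert expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
```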
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
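The healthz wait above amounts to polling https://192.168.39.110:8444/healthz on a short interval and treating connection refused, 403 (anonymous requests are rejected until the RBAC bootstrap roles exist), and 500 (post-start hooks still settling) as "not ready yet", returning only once the endpoint answers 200 "ok". A hedged Go sketch of that loop shape (not minikube's api_server.go; the endpoint comes from the log, the overall timeout is assumed):

```go
// Illustrative healthz polling loop, mirroring the retries seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver's serving cert is not trusted by this probe, so skip
	// verification, as a bootstrap health probe typically would.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.39.110:8444/healthz" // endpoint seen in the log above
	deadline := time.Now().Add(2 * time.Minute)  // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver static pod restarts
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			// 403 before RBAC bootstrap roles exist, 500 while post-start
			// hooks are still failing; either way, keep retrying.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```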
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
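	(Editor's illustrative sketch, not part of the captured log: the 61804 blocks above show minikube's fallback diagnostics once pgrep finds no kube-apiserver process: it lists CRI containers per control-plane component, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal hand-run version of the same checks on the node, assuming crictl and journalctl are available as in the log, would be the following; the describe-nodes call is expected to fail with "connection refused" while the apiserver is down, exactly as logged above.)
	    # Probe for a running apiserver process (same pattern polled above).
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    # List any control-plane containers the CRI runtime knows about.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done
	    # Collect the same logs minikube gathers when no containers are found.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig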
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
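Every "describe nodes" attempt fails the same way: the bundled kubectl (v1.20.0) cannot reach an apiserver on localhost:8443, which is consistent with the empty kube-apiserver container listings above. One could confirm from the node that nothing is serving that port; the commands below are illustrative only and were not executed as part of this run:

    # check for a listener on the default apiserver port
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
    # the apiserver health endpoint; on this node it would report "connection refused"
    curl -k https://localhost:8443/healthz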
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
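The interleaved pod_ready.go lines appear to come from other concurrently running tests (log prefixes 61115, 61447 and 61689), each polling its cluster's metrics-server pod, which stays NotReady for the whole window shown. On a cluster whose apiserver is reachable, the same condition can be read directly; the pod name below is taken from the log and the commands are illustrative only:

    # print the Ready condition of the metrics-server pod observed by the 61115 run
    kubectl -n kube-system get pod metrics-server-6867b74b74-82tmq \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # fuller view of why it is not ready (events, container statuses)
    kubectl -n kube-system describe pod metrics-server-6867b74b74-82tmq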
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
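[editor's note] For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA certificate: it is the SHA-256 of the DER-encoded Subject Public Key Info of ca.crt. A small Go sketch follows, assuming the conventional kubeadm path /etc/kubernetes/pki/ca.crt (the log does not show that path).

    // Recompute the kubeadm discovery token CA cert hash:
    // sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed conventional path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }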
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
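[editor's note] The repeated "kubectl get sa default" runs above are minikube polling, roughly every 500ms here, until the default service account exists, after granting cluster-admin to kube-system:default via the minikube-rbac clusterrolebinding. A sketch of that loop, with the binary path and flags copied from the log and an illustrative retry cap that is not minikube's own value:

    // Sketch of the post-init privilege/service-account wait seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        // Grant cluster-admin to kube-system:default (the minikube-rbac binding).
        _ = exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default", kubeconfig).Run()

        for i := 0; i < 120; i++ { // illustrative retry cap
            if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }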
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
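[editor's note] The metrics-server step above copies four manifests into /etc/kubernetes/addons/ and applies them in a single kubectl call with KUBECONFIG pointing at the in-VM kubeconfig. A condensed sketch of that final apply, quoting the command from the log (it would normally run inside the VM, not on the host):

    // Apply the metrics-server manifests the same way the log does.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.31.0/kubectl apply "+
                "-f /etc/kubernetes/addons/metrics-apiservice.yaml "+
                "-f /etc/kubernetes/addons/metrics-server-deployment.yaml "+
                "-f /etc/kubernetes/addons/metrics-server-rbac.yaml "+
                "-f /etc/kubernetes/addons/metrics-server-service.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }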
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
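[editor's note] Each pod_ready check above boils down to reading the pod's Ready condition. The following is a hedged client-go sketch, not minikube's own helper, performing the same check for one of the pods named in the log; the kubeconfig path and pod name are taken from the log.

    // Check a pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "etcd-default-k8s-diff-port-585256", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }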
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
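[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver until it answers 200/ok. A Go sketch with the endpoint taken from the log; skipping TLS verification here is an illustrative shortcut, the real check trusts the cluster CA instead.

    // Poll the apiserver healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative shortcut only; do not skip verification in real tooling.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.39.110:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }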
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
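[editor's note] The kubelet-check failure above is the same probe as "curl -sSL http://localhost:10248/healthz". A one-shot Go equivalent that reproduces the connection-refused result when the kubelet is not running:

    // One-shot kubelet health probe, equivalent to the curl shown above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("http://127.0.0.1:10248/healthz")
        if err != nil {
            // Matches the "connection refused" seen in the log when the kubelet is down.
            fmt.Println("kubelet not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz %d: %s\n", resp.StatusCode, body)
    }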
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
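	The join commands printed above carry a bootstrap token and the SHA-256 hash of the cluster CA public key. Both can be regenerated on the control-plane node if the token expires; a sketch, assuming the certificate directory reported earlier in this log (/var/lib/minikube/certs):

	    # Print a fresh join command (creates a new bootstrap token):
	    sudo kubeadm token create --print-join-command
	    # Recompute the --discovery-token-ca-cert-hash value by hand:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'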
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
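	The 496-byte file copied above is the bridge CNI configuration mentioned a few lines earlier. Its exact contents are not reproduced in this log; if needed they can be read back from the guest (profile name and path taken from the log):

	    minikube -p embed-certs-901410 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist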
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
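	Outside the test harness the same metrics-server manifests are applied by the addon manager; a rough equivalent, assuming the profile name from this log (note the test deliberately points metrics-server at a fake.domain image, so the pod is expected to stay unready here):

	    minikube -p embed-certs-901410 addons enable metrics-server
	    kubectl -n kube-system get deploy metrics-server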
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
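	The probe above hits the apiserver's /healthz endpoint directly over the VM IP. With the kubeconfig minikube has just written, the same check can be made through kubectl (minikube names the context after the profile):

	    kubectl --context embed-certs-901410 get --raw /healthz
	    # Per-check detail on newer apiservers:
	    kubectl --context embed-certs-901410 get --raw '/readyz?verbose'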
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
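	At this point embed-certs-901410 is the active kubectl context; a quick sanity check of the freshly started cluster:

	    kubectl config current-context        # expect: embed-certs-901410
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods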
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
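	For this v1.20.0 init attempt (process 61804) the kubelet never answers on 127.0.0.1:10248, so kubeadm times out waiting for the control plane. The probes kubeadm suggests above, collected as one runnable snippet (run inside the guest, e.g. via minikube ssh):

	    systemctl status kubelet
	    journalctl -xeu kubelet --no-pager | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # then, for a failing container found above:
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID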
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.558804760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b305a62-9938-4c3c-99d5-6795707bbf1c name=/runtime.v1.RuntimeService/Version
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.559936760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64be1cba-463e-4993-be83-aca088e0ca77 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.560405034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598361560374387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64be1cba-463e-4993-be83-aca088e0ca77 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.560835036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3fa2b0e-8cfc-4bbb-885d-af628d9c21a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.560885393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3fa2b0e-8cfc-4bbb-885d-af628d9c21a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.561157568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3fa2b0e-8cfc-4bbb-885d-af628d9c21a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.576006027Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9b8998b7-504c-47b4-8783-ab47c80405f4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.576554654Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c514e832-2998-4439-bb97-0d6d4eb4e499,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597561530983006,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T01:05:53.637416801Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-dz9zk,Uid:67e29ce3-7f67-4b96-8030-c980773b5772,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17235975615289930
26,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T01:05:53.637426537Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9666b7a6819d352f826d5d85fad28847c2f91d2904dd03b45371da4e6d291127,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-gb2dt,Uid:c950c58e-c5c3-4535-b10f-f4379ff03409,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597559726574250,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-gb2dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c950c58e-c5c3-4535-b10f-f4379ff03409,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T01:05:53.6
37409757Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&PodSandboxMetadata{Name:kube-proxy-pgm9t,Uid:efad60b0-c62e-4c47-974b-98fdca9d3496,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597553954500994,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-974b-98fdca9d3496,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T01:05:53.637422960Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d0ba9510-e0a5-4558-98e3-a9510920f93a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597553950201210,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-08-14T01:05:53.637425077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-776907,Uid:f30aa569f7332a3771c25ad0568b0e7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597550155761361,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f30aa569f7332a3771c25ad0568b0e7d,kubernetes.io/config.seen: 2024-08-14T01:05:49.636131321Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-776907,Uid:e3d30aa4c418230085009c5296d2a369,Namespace:k
ube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597550149165336,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.94:8443,kubernetes.io/config.hash: e3d30aa4c418230085009c5296d2a369,kubernetes.io/config.seen: 2024-08-14T01:05:49.636126472Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-776907,Uid:1727822331a98d206a1c6455e6be9d1a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597550148231357,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.94:2379,kubernetes.io/config.hash: 1727822331a98d206a1c6455e6be9d1a,kubernetes.io/config.seen: 2024-08-14T01:05:49.701378004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-776907,Uid:da5bdcd48f884b5b86c729f49cf3dd71,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723597550144205219,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: da5bdcd48f884b5b86c729f49cf3dd71,kube
rnetes.io/config.seen: 2024-08-14T01:05:49.636130339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9b8998b7-504c-47b4-8783-ab47c80405f4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.577325159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2c7d3b5-05cc-4da7-aac8-6e5b0d82cc1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.577390609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2c7d3b5-05cc-4da7-aac8-6e5b0d82cc1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.577584784Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2c7d3b5-05cc-4da7-aac8-6e5b0d82cc1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.598339077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=639b6e43-e237-4c42-9d35-d03d9531ebff name=/runtime.v1.RuntimeService/Version
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.598432874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=639b6e43-e237-4c42-9d35-d03d9531ebff name=/runtime.v1.RuntimeService/Version
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.599778540Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d37c19f0-d633-414b-861a-06c1b9918c0a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.600344406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598361600315450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d37c19f0-d633-414b-861a-06c1b9918c0a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.600801137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca264bc1-dcb0-458c-b04d-9563ccfecec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.600867750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca264bc1-dcb0-458c-b04d-9563ccfecec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.601122933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca264bc1-dcb0-458c-b04d-9563ccfecec7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.631649060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6141b24e-fd99-4c52-9a2b-c39244c459b3 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.631752067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6141b24e-fd99-4c52-9a2b-c39244c459b3 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.632954165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a024855-e59e-47cc-9db7-196dfa631c43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.633384211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598361633362609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a024855-e59e-47cc-9db7-196dfa631c43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.634358772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c2bccba-b511-43da-b4e1-b9c59564a8f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.634778031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c2bccba-b511-43da-b4e1-b9c59564a8f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:19:21 no-preload-776907 crio[731]: time="2024-08-14 01:19:21.635073650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c2bccba-b511-43da-b4e1-b9c59564a8f1 name=/runtime.v1.RuntimeService/ListContainers
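
The repeated Version/ImageFsInfo/ListContainers requests above are routine CRI polling against CRI-O, and each one gets an immediate response, so the runtime itself looks healthy. The same container listing they return can be reproduced by hand; a minimal sketch, assuming this profile's VM is still running and using the ssh form seen elsewhere in this report:

    minikube -p no-preload-776907 ssh "sudo crictl ps -a"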
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4d7da10edbe3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c94f7f9e7de03       storage-provisioner
	5778f1274c91f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d9f891d25e8e1       busybox
	7d3cb1d648607       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   83a1a082fd506       coredns-6f6b679f8f-dz9zk
	0ec88a5a7a9d5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   6b12b85e75b67       kube-proxy-pgm9t
	bacb411cbea20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c94f7f9e7de03       storage-provisioner
	89953f1dc813e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   8d49aac6a7eb6       kube-scheduler-no-preload-776907
	3ef9bf666bbbc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   ee7ecc8991ff7       kube-controller-manager-no-preload-776907
	1632d4b88f7f0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   cc557c0f92cc4       etcd-no-preload-776907
	ddba3ebb8413d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   ff775dc6fd486       kube-apiserver-no-preload-776907
	
	
	==> coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58321 - 41401 "HINFO IN 3415938331824396986.8339278305176018987. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008149157s
	
	
	==> describe nodes <==
	Name:               no-preload-776907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-776907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=no-preload-776907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_57_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:57:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-776907
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:19:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:16:36 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:16:36 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:16:36 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:16:36 +0000   Wed, 14 Aug 2024 01:06:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.94
	  Hostname:    no-preload-776907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa38961189044b487fbdbba224d46d9
	  System UUID:                8aa38961-1890-44b4-87fb-dbba224d46d9
	  Boot ID:                    c38d77c1-1566-4add-8535-79ad41888d31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-6f6b679f8f-dz9zk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-776907                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-776907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-776907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-pgm9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-776907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-gb2dt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-776907 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-776907 event: Registered Node no-preload-776907 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-776907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-776907 event: Registered Node no-preload-776907 in Controller
	
	
	==> dmesg <==
	[Aug14 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050312] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.659382] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.819760] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544186] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.265452] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.058814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056161] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.175041] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.137703] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.272850] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[ +14.726382] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.053351] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.754942] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +3.835726] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.204496] systemd-fstab-generator[2062]: Ignoring "noauto" option for root device
	[  +3.258017] kauditd_printk_skb: 61 callbacks suppressed
	[Aug14 01:06] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] <==
	{"level":"info","ts":"2024-08-14T01:05:51.995125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:05:51.996561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:05:51.997317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T01:05:51.997445Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T01:05:51.998493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T01:06:00.106143Z","caller":"traceutil/trace.go:171","msg":"trace[218644334] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"369.512897ms","start":"2024-08-14T01:05:59.736605Z","end":"2024-08-14T01:06:00.106118Z","steps":["trace[218644334] 'process raft request'  (duration: 369.32463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:06:00.106802Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:05:59.736588Z","time spent":"369.676931ms","remote":"127.0.0.1:43718","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4784,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pgm9t\" mod_revision:505 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pgm9t\" value_size:4733 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-pgm9t\" > >"}
	{"level":"warn","ts":"2024-08-14T01:06:00.713370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.989197ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493627490022472772 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:519 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T01:06:00.713636Z","caller":"traceutil/trace.go:171","msg":"trace[1907212785] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:613; }","duration":"547.382363ms","start":"2024-08-14T01:06:00.166239Z","end":"2024-08-14T01:06:00.713621Z","steps":["trace[1907212785] 'read index received'  (duration: 229.636315ms)","trace[1907212785] 'applied index is now lower than readState.Index'  (duration: 317.745475ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:06:00.713812Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"547.565924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-776907\" ","response":"range_response_count:1 size:4638"}
	{"level":"info","ts":"2024-08-14T01:06:00.713876Z","caller":"traceutil/trace.go:171","msg":"trace[439799794] range","detail":"{range_begin:/registry/minions/no-preload-776907; range_end:; response_count:1; response_revision:579; }","duration":"547.637876ms","start":"2024-08-14T01:06:00.166230Z","end":"2024-08-14T01:06:00.713868Z","steps":["trace[439799794] 'agreement among raft nodes before linearized reading'  (duration: 547.477268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:06:00.713961Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.166197Z","time spent":"547.749165ms","remote":"127.0.0.1:43716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":1,"response size":4661,"request content":"key:\"/registry/minions/no-preload-776907\" "}
	{"level":"info","ts":"2024-08-14T01:06:00.713985Z","caller":"traceutil/trace.go:171","msg":"trace[355767800] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"596.447032ms","start":"2024-08-14T01:06:00.117529Z","end":"2024-08-14T01:06:00.713976Z","steps":["trace[355767800] 'process raft request'  (duration: 596.022747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:06:00.714185Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.117517Z","time spent":"596.514486ms","remote":"127.0.0.1:43718","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3806,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:508 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3752 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"info","ts":"2024-08-14T01:06:00.713876Z","caller":"traceutil/trace.go:171","msg":"trace[1321742999] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"599.854489ms","start":"2024-08-14T01:06:00.114007Z","end":"2024-08-14T01:06:00.713861Z","steps":["trace[1321742999] 'process raft request'  (duration: 281.002886ms)","trace[1321742999] 'compare'  (duration: 317.861878ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:06:00.714365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.113977Z","time spent":"600.353758ms","remote":"127.0.0.1:44046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:519 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-14T01:06:00.719504Z","caller":"traceutil/trace.go:171","msg":"trace[1153514793] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"316.504152ms","start":"2024-08-14T01:06:00.402953Z","end":"2024-08-14T01:06:00.719457Z","steps":["trace[1153514793] 'process raft request'  (duration: 316.395846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:06:00.719587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.402937Z","time spent":"316.614323ms","remote":"127.0.0.1:43632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-gb2dt.17eb72d96368e34b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-gb2dt.17eb72d96368e34b\" value_size:707 lease:2270255453167696814 >> failure:<>"}
	{"level":"warn","ts":"2024-08-14T01:06:20.558393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.380863ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493627490022472974 > lease_revoke:<id:1f81914e6b743c81>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T01:06:20.558660Z","caller":"traceutil/trace.go:171","msg":"trace[1788565001] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:663; }","duration":"288.370866ms","start":"2024-08-14T01:06:20.270254Z","end":"2024-08-14T01:06:20.558625Z","steps":["trace[1788565001] 'read index received'  (duration: 45.634344ms)","trace[1788565001] 'applied index is now lower than readState.Index'  (duration: 242.735561ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:06:20.558855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.571674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-gb2dt\" ","response":"range_response_count:1 size:4339"}
	{"level":"info","ts":"2024-08-14T01:06:20.558955Z","caller":"traceutil/trace.go:171","msg":"trace[1568626227] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-gb2dt; range_end:; response_count:1; response_revision:622; }","duration":"288.690307ms","start":"2024-08-14T01:06:20.270249Z","end":"2024-08-14T01:06:20.558939Z","steps":["trace[1568626227] 'agreement among raft nodes before linearized reading'  (duration: 288.479034ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:15:52.053318Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-08-14T01:15:52.068734Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"14.681704ms","hash":3522602525,"current-db-size-bytes":2699264,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2699264,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-14T01:15:52.068847Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3522602525,"revision":859,"compact-revision":-1}
	
	
	==> kernel <==
	 01:19:21 up 14 min,  0 users,  load average: 0.23, 0.16, 0.12
	Linux no-preload-776907 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] <==
	E0814 01:15:54.447156       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0814 01:15:54.447257       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:15:54.448428       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:15:54.448501       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:16:54.449391       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:16:54.449841       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:16:54.449723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:16:54.449953       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 01:16:54.451164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:16:54.451268       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:18:54.452238       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:18:54.452378       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:18:54.452484       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:18:54.452571       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:18:54.453742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:18:54.453854       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
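
The recurring 503s for v1beta1.metrics.k8s.io mean the aggregated metrics API has no healthy backend: the metrics-server pod never starts because its image pull against the unresolvable registry fake.domain keeps backing off (see the kubelet entries below). A sketch of how that state could be confirmed, assuming the kubeconfig context matches the profile name and the standard k8s-app=metrics-server label from the upstream manifest:

    kubectl --context no-preload-776907 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-776907 -n kube-system get pods -l k8s-app=metrics-server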
	
	
	==> kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] <==
	E0814 01:13:57.098877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:13:57.604023       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:14:27.105875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:14:27.614634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:14:57.114893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:14:57.622893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:15:27.121514       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:15:27.629597       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:15:57.127781       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:15:57.638214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:16:27.134218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:27.646831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:16:36.509545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-776907"
	E0814 01:16:57.140777       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:57.655538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:16:58.730751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="255.954µs"
	I0814 01:17:13.730245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="147.348µs"
	E0814 01:17:27.146880       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:27.663384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:17:57.153539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:57.672091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:18:27.160093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:27.679523       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:18:57.167714       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:57.687141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:05:54.606252       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:05:54.662856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.94"]
	E0814 01:05:54.662980       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:05:54.719407       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:05:54.719534       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:05:54.719630       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:05:54.740931       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:05:54.741921       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:05:54.741961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:05:54.749780       1 config.go:197] "Starting service config controller"
	I0814 01:05:54.750936       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:05:54.751586       1 config.go:326] "Starting node config controller"
	I0814 01:05:54.751614       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:05:54.754101       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:05:54.754145       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:05:54.852257       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:05:54.852269       1 shared_informer.go:320] Caches are synced for node config
	I0814 01:05:54.855100       1 shared_informer.go:320] Caches are synced for endpoint slice config
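
The nftables errors at startup come from kube-proxy cleaning up a backend it does not use; it then logs "Using iptables Proxier" and all three informer caches sync, so service proxying is working in iptables mode. If needed, the programmed rules can be spot-checked from inside the VM (a sketch; KUBE-SERVICES is the standard kube-proxy chain name):

    minikube -p no-preload-776907 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head"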
	
	
	==> kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] <==
	I0814 01:05:51.574748       1 serving.go:386] Generated self-signed cert in-memory
	I0814 01:05:53.506579       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 01:05:53.509123       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:05:53.524184       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0814 01:05:53.524302       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0814 01:05:53.524387       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 01:05:53.524433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 01:05:53.524594       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 01:05:53.524525       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 01:05:53.524752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0814 01:05:53.524777       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0814 01:05:53.624944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0814 01:05:53.625176       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0814 01:05:53.625349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:18:13 no-preload-776907 kubelet[1436]: E0814 01:18:13.714694    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:18:19 no-preload-776907 kubelet[1436]: E0814 01:18:19.855322    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598299855019638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:19 no-preload-776907 kubelet[1436]: E0814 01:18:19.855361    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598299855019638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:28 no-preload-776907 kubelet[1436]: E0814 01:18:28.714841    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:18:29 no-preload-776907 kubelet[1436]: E0814 01:18:29.858975    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598309858728546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:29 no-preload-776907 kubelet[1436]: E0814 01:18:29.859396    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598309858728546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:39 no-preload-776907 kubelet[1436]: E0814 01:18:39.715456    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:18:39 no-preload-776907 kubelet[1436]: E0814 01:18:39.861242    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598319860377372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:39 no-preload-776907 kubelet[1436]: E0814 01:18:39.861358    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598319860377372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]: E0814 01:18:49.729325    1436 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]: E0814 01:18:49.863071    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598329862435259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:49 no-preload-776907 kubelet[1436]: E0814 01:18:49.863096    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598329862435259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:52 no-preload-776907 kubelet[1436]: E0814 01:18:52.715011    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:18:59 no-preload-776907 kubelet[1436]: E0814 01:18:59.867989    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598339866258154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:59 no-preload-776907 kubelet[1436]: E0814 01:18:59.868026    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598339866258154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:05 no-preload-776907 kubelet[1436]: E0814 01:19:05.716088    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:19:09 no-preload-776907 kubelet[1436]: E0814 01:19:09.869843    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598349869402331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:09 no-preload-776907 kubelet[1436]: E0814 01:19:09.869947    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598349869402331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:18 no-preload-776907 kubelet[1436]: E0814 01:19:18.714616    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:19:19 no-preload-776907 kubelet[1436]: E0814 01:19:19.871257    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598359870990601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:19 no-preload-776907 kubelet[1436]: E0814 01:19:19.871284    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598359870990601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] <==
	I0814 01:05:54.251864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 01:06:24.255920       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] <==
	I0814 01:06:25.043632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:06:25.053138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:06:25.053251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:06:42.455501       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:06:42.457132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390!
	I0814 01:06:42.459143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b183bb2f-bdc3-4b88-9bc3-98a8a2a13ac5", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390 became leader
	I0814 01:06:42.557857       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-776907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gb2dt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt: exit status 1 (63.357764ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gb2dt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:20:01.952681383 +0000 UTC m=+5592.116178618
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
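The readiness wait above can be approximated by hand with kubectl wait; this is only a rough manual equivalent of the check the test performs, not the test's own implementation. The namespace and label selector are taken from the log above, and the context name assumes the usual minikube convention of naming the kubeconfig context after the profile (default-k8s-diff-port-585256):

	kubectl --context default-k8s-diff-port-585256 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s

A zero exit status means at least one matching pod reported Ready within the timeout; here the test failed because no pod matching k8s-app=kubernetes-dashboard became ready within 9m0s.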
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-585256 logs -n 25: (1.930515885s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
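
The block above is the per-image cache flow: for each image minikube stats the tarball already staged under /var/lib/minikube/images, skips the copy when it exists, loads it into the runtime with podman, and only re-transfers images the runtime reports as missing (as it did for storage-provisioner). A minimal shell sketch of that flow for one image, assuming the coredns path and tag seen in this run:

    # Illustrative only: mirrors the stat -> skip-copy -> podman load -> inspect sequence from the log.
    IMG_TAR=/var/lib/minikube/images/coredns_v1.11.1
    IMG_REF=registry.k8s.io/coredns/coredns:v1.11.1

    # 1. Tarball already on the guest? (minikube compares size/mtime via stat)
    sudo stat -c "%s %y" "$IMG_TAR" >/dev/null 2>&1 && echo "copy: skipping $IMG_TAR (exists)"

    # 2. Load it into the podman/CRI-O image store.
    sudo podman load -i "$IMG_TAR"

    # 3. Confirm the runtime now has it (same check the log runs for storage-provisioner).
    sudo podman image inspect --format '{{.Id}}' "$IMG_REF"
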
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
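
A note on the kubelet unit rendered above: the empty ExecStart= line followed by a populated one is the standard systemd idiom for clearing the command inherited from the base unit before redefining it in a drop-in. Minikube writes this content to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below) and then daemon-reloads. An illustrative rendering of that drop-in, using the flags from this run:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (illustrative layout)
    [Unit]
    Wants=crio.service

    [Service]
    # The first ExecStart= resets the base unit's command; the second defines the real one.
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet \
        --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
        --config=/var/lib/kubelet/config.yaml \
        --hostname-override=no-preload-776907 \
        --kubeconfig=/etc/kubernetes/kubelet.conf \
        --node-ip=192.168.72.94

    [Install]
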
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
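
The lines up to this point are the "install" half of the restart: the rendered kubelet drop-in, kubelet.service unit and kubeadm.yaml.new are streamed onto the guest, control-plane.minikube.internal is pinned in /etc/hosts, and systemd is reloaded before the kubelet is started. A condensed shell sketch of the /etc/hosts pinning and restart, with the one-liner from the log unpacked for readability (not the exact code minikube executes):

    # Pin control-plane.minikube.internal to the node IP, replacing any stale entry first.
    if ! grep -q $'192.168.72.94\tcontrol-plane.minikube.internal$' /etc/hosts; then
      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
        printf '192.168.72.94\tcontrol-plane.minikube.internal\n'
      } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
    fi

    # Pick up the new kubelet drop-in, then start the kubelet.
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
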
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
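
The openssl/ln sequence above registers each CA with the system trust store: OpenSSL looks certificates up by a hash of their subject name, so every PEM linked into /etc/ssl/certs also gets a <subject-hash>.0 symlink. The same two steps for the minikube CA, as a sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
    # OpenSSL resolves CAs by subject-name hash; b5213941 is the hash seen in this run.
    HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
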
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
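
The -checkend 86400 calls are a pre-flight expiry check: openssl exits non-zero when the certificate will expire within the next 86400 seconds (24 hours), which is how minikube decides whether the existing control-plane certs can be reused. The same check over the certs listed above, wrapped in a loop (the loop itself is illustrative):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        || echo "${c}.crt expires within 24h; regeneration needed"
    done
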
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
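
With the stale kubeconfigs removed and the new kubeadm.yaml copied into place, the restart rebuilds the control plane piecewise via kubeadm init phase rather than a full kubeadm init: certs, kubeconfigs, kubelet-start, the control-plane static pods, then local etcd. The same sequence as a standalone script (a sketch; the PATH override mirrors how minikube points kubeadm at its own binaries):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.31.0

    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all          --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start      --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all  --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local         --config "$CFG"
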
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
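
Readiness is then waited for in two stages: pgrep until a kube-apiserver process exists, then repeated probes of /healthz on the advertise address until it answers. A rough curl equivalent (illustrative; minikube does this in-process with its own HTTP client and certificate handling):

    # Stage 1: wait for the apiserver process.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done

    # Stage 2: wait for a healthy endpoint (-k only because this sketch skips CA setup).
    until curl -ksf https://192.168.72.94:8443/healthz | grep -q '^ok$'; do sleep 1; done
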
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
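
For reference, the openssl x509 -checkend 86400 commands above assert that each control-plane certificate stays valid for at least another 24 hours before the cluster is restarted. Below is a minimal Go sketch of the same check; it is illustrative only (not minikube's code), and the certificate path is an assumed example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute any PEM-encoded certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the
	// certificate expires within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
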
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
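
The healthz probes above poll https://192.168.39.110:8444/healthz every 500ms until the apiserver reports 200 OK. A rough Go sketch of that polling pattern follows; it is an illustration under stated assumptions (fixed timeout, TLS verification skipped because the test cluster signs its own certificates), not the actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the
// deadline passes, sleeping 500ms between attempts.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.110:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}
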
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
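
The warning above is just a missing cache tarball on the Jenkins host: before transferring an image into the VM, minikube stats the cached archive and gives up on the whole batch if one is absent. A minimal Go sketch of that pre-check, assuming a hypothetical cachedImagePath helper that mirrors the directory layout seen in the log (this is illustrative, not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cachedImagePath is a hypothetical helper mirroring the layout in the log:
    // <minikube home>/cache/images/<arch>/<registry>/<name>_<tag>.
    func cachedImagePath(home, arch, image, tag string) string {
        return filepath.Join(home, "cache", "images", arch, image+"_"+tag)
    }

    func main() {
        p := cachedImagePath("/home/jenkins/minikube-integration/19429-9425/.minikube",
            "amd64", "registry.k8s.io/pause", "3.2")
        if _, err := os.Stat(p); err != nil {
            // This is the condition that produces the
            // "Unable to load cached images" warning above.
            fmt.Printf("X Unable to load cached images: stat %s: %v\n", p, err)
            return
        }
        fmt.Println("cache hit:", p)
    }
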
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
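
The one-liner above rewrites /etc/hosts idempotently: it strips any existing control-plane.minikube.internal entry, appends the current one, and copies the result back with sudo. A rough Go equivalent of the same read-filter-append step (illustrative only; the write-back to /etc/hosts, which still needs root, is left out):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // updateHosts drops any line ending in "\tcontrol-plane.minikube.internal"
    // and appends a fresh entry for the given IP, mirroring the shell
    // one-liner in the log.
    func updateHosts(contents, ip string) string {
        var kept []string
        for _, line := range strings.Split(contents, "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(updateHosts(strings.TrimRight(string(data), "\n"), "192.168.61.123"))
    }
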
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
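
In parallel, the embed-certs-901410 VM is still booting, so libmachine keeps asking the libvirt network for the domain's IP address and retries with a growing, jittered delay (the retry.go lines above). A bare-bones sketch of that retry loop, with a stubbed lookupIP standing in for the real DHCP-lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var attempts int

    // lookupIP is a stand-in for querying the libvirt DHCP leases; it fails a
    // few times to mimic "unable to find current IP address" in the log.
    func lookupIP() (string, error) {
        attempts++
        if attempts < 4 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.50.210", nil
    }

    func main() {
        delay := 500 * time.Millisecond
        for {
            ip, err := lookupIP()
            if err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Grow the wait and add jitter, roughly like the intervals
            // (585ms, 827ms, 1.43s, ...) seen above.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
    }
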
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
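
The certificate installation pattern above is: copy each PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as <hash>.0 in /etc/ssl/certs so OpenSSL-based clients can find it. A hedged Go sketch that shells out to openssl for the hash, as the log does (paths taken from the log; error handling trimmed, and root is needed to write into /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM file and creates
    // the /etc/ssl/certs/<hash>.0 symlink, like the `openssl x509 -hash`
    // followed by `ln -fs` commands above.
    func linkCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f behaviour: replace an existing link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
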
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
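
Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would force the cert to be regenerated. The same check can be done natively, as in this small sketch (the certificate path is taken from the log, everything else is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires inside the given window, i.e. what
    // `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
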
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
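
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a 500ms poll: api_server.go keeps re-running the pattern match until a kube-apiserver process appears or a timeout fires (the latter is what eventually happens in this failing test). A minimal sketch of such a wait loop; the command and interval match the log, while the 4-minute deadline is an assumption, not minikube's value:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep for a kube-apiserver process launched by
    // minikube, sleeping 500ms between attempts, until it appears or the
    // deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("apiserver process appeared, pid(s): %s", out)
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for apiserver process", timeout)
    }

    func main() {
        if err := waitForAPIServer(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
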
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
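Everything provisionDockerMachine does on the guest above (the cert scp calls, the /etc/sysconfig/crio.minikube drop-in, the crio restart) runs through the SSH session that sshutil.go opened with the machine's id_rsa key. Below is a minimal sketch of running one such command with golang.org/x/crypto/ssh; the helper name runProvisionCmd is invented here, minikube's real ssh_runner adds retries, scp and session reuse, and the "%!s(MISSING)" in the logged command is read as the logger mangling a literal %s format verb.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runProvisionCmd dials the guest with the machine's private key and runs one
// provisioning command, returning its combined output. Illustrative only.
func runProvisionCmd(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// The command logged above, with the mangled %!s(MISSING) read as a literal %s.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := runProvisionCmd("192.168.50.210:22", "docker",
		"/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa", cmd)
	fmt.Println(out, err)
}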
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
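fix.go compares the guest's "date +%s.%N" output (the "%!s(MISSING).%!N(MISSING)" above is the logger mangling that literal format string) against the host timestamp taken just before the command ran, and only resyncs the clock if the skew is too large. A rough sketch of that comparison follows; the 2-second tolerance is an assumed value, the log only shows that a ~70ms delta passes.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (e.g. "1723597590.505165718")
// and returns how far the guest clock is from the host timestamp captured when the
// command was issued. Float parsing drops sub-microsecond precision, which is fine
// for a skew check.
func clockDelta(guestOut string, hostAt time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostAt), nil
}

func main() {
	// Host timestamp and guest output taken from the log lines above.
	host := time.Date(2024, 8, 14, 1, 6, 30, 434773276, time.UTC)
	delta, _ := clockDelta("1723597590.505165718", host)
	const tolerance = 2 * time.Second // assumed threshold, not minikube's exact constant
	fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
}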
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
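The sed/grep edits above converge /etc/crio/crio.conf.d/02-crio.conf on a fixed set of settings: the pause image, the cgroupfs cgroup manager, conmon in the pod cgroup, and an unprivileged-port sysctl. A small sketch that re-checks those settings on the guest, derived only from the commands shown above (the file path and expected strings come straight from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The settings the sed/grep edits above converge on; checking for them is a
	// cheap way to confirm the drop-in ended up in the expected state.
	want := []string{
		`pause_image = "registry.k8s.io/pause:3.10"`,
		`cgroup_manager = "cgroupfs"`,
		`conmon_cgroup = "pod"`,
		`"net.ipv4.ip_unprivileged_port_start=0",`,
	}
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, w := range want {
		fmt.Printf("%-55s present=%v\n", w, strings.Contains(string(data), w))
	}
}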
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
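The sequence just above, probe the bridge-nf-call-iptables sysctl, modprobe br_netfilter when the path is missing, then force net.ipv4.ip_forward to 1, is what makes bridged pod traffic visible to iptables and lets the node route it. A local, root-only sketch of the same checks, purely illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl path is absent, the br_netfilter module has
	// not been loaded yet; load it, then enable IPv4 forwarding, mirroring the
	// logged sequence. Requires root.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}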
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
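The /bin/bash one-liner above is the idempotent hosts-file update: drop any existing tab-separated host.minikube.internal line, append a fresh "192.168.50.1<TAB>host.minikube.internal" entry, and copy the temp file back over /etc/hosts with sudo. The same pattern in plain Go, acting on a local copy rather than via sudo; pinHostsEntry is an invented helper name.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any line ending in "\t<name>" from the hosts file and
// appends "ip\tname", mirroring the grep -v / echo / cp pipeline in the log.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/tmp/hosts-copy", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}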
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
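The openssl -hash / ln -fs pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash plus a ".0" suffix (16589.pem becomes 51391683.0, 165892.pem becomes 3ec20f2e.0, minikubeCA.pem becomes b5213941.0), which is the directory layout OpenSSL uses for CA lookup. A sketch of the same operation; linkBySubjectHash is an invented name and the fixed ".0" suffix assumes no hash collisions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// which is what the openssl/ln pair in the log is doing for each CA file.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683" for 16589.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -f
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/16589.pem", "/etc/ssl/certs"))
}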
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
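The -checkend 86400 probes above just ask whether each control-plane certificate expires within the next 24 hours, which feeds into whether certs get regenerated on restart. The equivalent check in pure Go with crypto/x509, as an illustration rather than minikube's own cert-checking code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}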
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
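The four grep/rm pairs above are the stale-config check from kubeadm.go: any of admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf that does not already point at https://control-plane.minikube.internal:8443 (here they are simply missing) is removed so the "kubeadm init phase kubeconfig" step below regenerates it. The same loop in a few lines of Go, purely as a sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: drop it so `kubeadm init phase kubeconfig`
		// writes a fresh one, mirroring the grep/rm pairs in the log.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}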
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
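Each "Gathering logs for ..." step in the cycle above simply runs the shell pipeline shown on the following line and captures its output. The command strings below are copied verbatim from the log; the Go harness around them is an illustrative sketch, not minikube's logs.go:

// Sketch of the log-gathering step: run the same pipelines the log shows and
// collect their combined output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}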
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
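Every "describe nodes" attempt in this section fails identically because nothing is listening on the apiserver port, so kubectl's connection to localhost:8443 is refused before any describe can run. A quick reachability check one could run on the node is sketched below; the address comes from the error message above, while the timeout is an arbitrary illustrative choice:

// Minimal connectivity check for the apiserver port reported in the errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("kube-apiserver is not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("kube-apiserver port is open; kubectl should be able to connect")
}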
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
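Each retry above repeats the same log-collection pass over SSH while the control plane is down. A manual equivalent, for reference (commands copied from the Run: lines in this log; the kubectl path is the binary minikube installs for v1.20.0):

	# log-collection pass minikube repeats between API-server checks (verbatim from the log above)
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a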
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
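The healthz wait above polls the apiserver endpoint directly. A sketch of the same probe done by hand (the CA path /var/lib/minikube/certs/ca.crt is an assumption about the guest layout, not taken from this log):

	# verify TLS against the cluster CA (assumed path), or use -k to skip verification
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.94:8443/healthz
	curl -k https://192.168.72.94:8443/healthz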
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
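With "no-preload-776907" reported ready, the pod list above (metrics-server-6867b74b74-gb2dt still Pending) can be re-checked from the host. A sketch; the k8s-app=metrics-server selector is the conventional label and an assumption here, not taken from this log:

	kubectl --context no-preload-776907 get pods -n kube-system
	kubectl --context no-preload-776907 describe pod -n kube-system -l k8s-app=metrics-server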
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
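The grep-and-remove sequence above checks whether each leftover kubeconfig still references the expected control-plane endpoint before the cluster is re-initialized; files that do not match (or do not exist) are deleted. As a rough illustration only (not minikube's actual code), the same check can be expressed with client-go's clientcmd loader; the path and endpoint below are the ones shown in the log.

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    // checkEndpoint reports whether any cluster entry in the kubeconfig at
    // path points at the expected API server endpoint. A missing file is
    // treated as stale so the caller can delete and regenerate it.
    func checkEndpoint(path, endpoint string) (bool, error) {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return false, nil
    		}
    		return false, err
    	}
    	for _, cluster := range cfg.Clusters {
    		if cluster.Server == endpoint {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := checkEndpoint("/etc/kubernetes/admin.conf",
    		"https://control-plane.minikube.internal:8443")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("endpoint matches:", ok)
    }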
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
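The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration that the log recommends for the kvm2 + crio combination; its exact contents are not shown here. The sketch below writes a generic bridge conflist of the same general shape. The plugin list, bridge name, and pod CIDR are illustrative assumptions, not minikube's actual file.

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	// A generic bridge CNI config list; values are illustrative and do
    	// not reproduce minikube's 1-k8s.conflist byte for byte.
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // assumed pod CIDR
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	data, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	// Writing under /etc/cni/net.d requires root on the target node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
    		panic(err)
    	}
    }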
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
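The node_ready wait above polls the node object until its Ready condition reports True. A minimal client-go sketch of that check follows; it assumes the jenkins kubeconfig path and node name that appear in the log, and it is an illustration rather than the harness's implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports Ready=True or the
    // timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19429-9425/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "default-k8s-diff-port-585256", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }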
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
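The healthz wait above simply issues GETs against https://192.168.39.110:8444/healthz until a 200 response with body "ok" comes back. A standard-library sketch of the same probe is below; certificate verification is skipped purely for illustration, since this probe does not configure the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a cert signed by the cluster CA; this
    		// illustrative probe skips verification instead of loading it.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.110:8444/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }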
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
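The system_pods wait above lists everything in kube-system and accepts the cluster once the expected apps are Running; note that metrics-server-6867b74b74-lzfpz is still Pending in the listing. A small client-go sketch that produces a similar summary is below, again as an illustration rather than the harness code, reusing the kubeconfig path from the log.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19429-9425/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print any pod that is not yet Running, e.g. a Pending metrics-server.
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
    		}
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }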
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
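The repeated "connection refused" probes against http://localhost:10248/healthz above, together with minikube's suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, usually point to the kubelet and CRI-O disagreeing on the cgroup driver. Below is a minimal sketch of how one might confirm that and retry the start; the profile name is a placeholder and the config paths are the conventional CRI-O/kubelet locations (the kubelet path matches the one written earlier in this log), so adjust them if the guest is laid out differently.

  # Sketch only: compare the cgroup driver configured for CRI-O and for the kubelet inside the guest (paths assumed).
  minikube -p <profile> ssh "sudo grep -ri cgroup_manager /etc/crio/"
  minikube -p <profile> ssh "sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"

  # Retry with the kubelet pinned to the systemd driver, as the suggestion in the log above proposes.
  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

If the two drivers already match, 'journalctl -xeu kubelet' inside the guest (as suggested earlier in the log) is the next place to look for why the kubelet never came up.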
	
	
	==> CRI-O <==
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.359964159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598403359937827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa62c03d-fa02-49af-85a0-97ae096cf0a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.360592174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=586150a4-c443-4c65-9830-487eee6f875e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.360648052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=586150a4-c443-4c65-9830-487eee6f875e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.360899112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=586150a4-c443-4c65-9830-487eee6f875e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.397394245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42e4c637-6a66-4a00-8df4-0e530bb5fe04 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.397728063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42e4c637-6a66-4a00-8df4-0e530bb5fe04 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.398729737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=838131ea-1f63-40f4-a661-31cc759dd6ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.399280379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598403399254344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=838131ea-1f63-40f4-a661-31cc759dd6ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.399773239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fc59333-9075-41b5-8bae-4771ccf86089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.399896945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fc59333-9075-41b5-8bae-4771ccf86089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.400098880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fc59333-9075-41b5-8bae-4771ccf86089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.434061459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b417651-a6ef-48f6-ba96-280e811d9c8b name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.434146681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b417651-a6ef-48f6-ba96-280e811d9c8b name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.435314689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72fc1e62-96a0-41ae-babc-0978fa27aacf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.435707626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598403435686527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72fc1e62-96a0-41ae-babc-0978fa27aacf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.436245407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7e820ed-32ad-4bb0-bcc6-ce2bdeddbab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.436313910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7e820ed-32ad-4bb0-bcc6-ce2bdeddbab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.436555906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7e820ed-32ad-4bb0-bcc6-ce2bdeddbab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.472420845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3e8840c-9671-419a-91b5-c73cb20c01df name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.472529101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3e8840c-9671-419a-91b5-c73cb20c01df name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.473372993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c0e926d-3110-4d80-a4f1-01b413efbdce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.473773203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598403473751085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c0e926d-3110-4d80-a4f1-01b413efbdce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.474245119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=464589ff-9bc1-4af6-ac54-60b8b8017b11 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.474311161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=464589ff-9bc1-4af6-ac54-60b8b8017b11 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:03 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:20:03.474518704Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=464589ff-9bc1-4af6-ac54-60b8b8017b11 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	178ad8a6bac13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6f98fff540479       storage-provisioner
	4a30f6f8799ca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9eca25d767f1a       coredns-6f6b679f8f-hngz9
	39c53a765019e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   01056aaf40aa4       coredns-6f6b679f8f-jmqk7
	85fda842f55cd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   00369bc4aed92       kube-proxy-rg8h9
	2030360e48549       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   bc1dd8cbb18bc       kube-scheduler-default-k8s-diff-port-585256
	3c2ba2d805c84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   05b6d78a4af04       etcd-default-k8s-diff-port-585256
	1e64d705f36b0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   71ce6596516d3       kube-controller-manager-default-k8s-diff-port-585256
	4c4a3040cf2e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   88cb42849b123       kube-apiserver-default-k8s-diff-port-585256
	a9ede10be40aa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   8eb9ce14fa9cd       kube-apiserver-default-k8s-diff-port-585256
	
	
	==> coredns [39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-585256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-585256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=default-k8s-diff-port-585256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 01:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-585256
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:19:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:16:03 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:16:03 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:16:03 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:16:03 +0000   Wed, 14 Aug 2024 01:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    default-k8s-diff-port-585256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666676425019446a941ebb971b72dcb3
	  System UUID:                66667642-5019-446a-941e-bb971b72dcb3
	  Boot ID:                    ed146dfb-8b26-4148-877f-d40b1fba7453
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hngz9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-6f6b679f8f-jmqk7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-585256                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-585256             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-585256    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-rg8h9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-585256             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-lzfpz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node default-k8s-diff-port-585256 event: Registered Node default-k8s-diff-port-585256 in Controller
	
	
	==> dmesg <==
	[  +0.050512] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.848194] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.410528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.245917] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.055560] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065016] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.210631] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.151481] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.339617] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.292368] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.063639] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.561710] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[Aug14 01:06] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.688674] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 01:10] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.201538] systemd-fstab-generator[2577]: Ignoring "noauto" option for root device
	[  +4.439054] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.617681] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.313063] systemd-fstab-generator[3015]: Ignoring "noauto" option for root device
	[  +0.090024] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.732176] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df] <==
	{"level":"info","ts":"2024-08-14T01:10:41.579362Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T01:10:41.579434Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-08-14T01:10:41.579458Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-08-14T01:10:41.580761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 switched to configuration voters=(18136004197972551064)"}
	{"level":"info","ts":"2024-08-14T01:10:41.583471Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2024-08-14T01:10:41.940859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T01:10:41.940970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T01:10:41.941014Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 1"}
	{"level":"info","ts":"2024-08-14T01:10:41.941045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T01:10:41.941069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2024-08-14T01:10:41.941097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T01:10:41.941123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2024-08-14T01:10:41.945938Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:10:41.947508Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:default-k8s-diff-port-585256 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T01:10:41.947981Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:10:41.948668Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:10:41.953236Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:10:41.956371Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:10:41.967166Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T01:10:41.956753Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:10:41.967599Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:10:41.967761Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:10:41.956774Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T01:10:41.973335Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T01:10:41.979057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.110:2379"}
	
	
	==> kernel <==
	 01:20:03 up 14 min,  0 users,  load average: 0.40, 0.28, 0.19
	Linux default-k8s-diff-port-585256 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce] <==
	W0814 01:15:44.918117       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:15:44.918170       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:15:44.919299       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:15:44.919352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:16:44.921890       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:16:44.921978       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:16:44.922202       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:16:44.922363       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:16:44.923111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:16:44.924277       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:18:44.923559       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:18:44.923914       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:18:44.924834       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:18:44.924969       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:18:44.925018       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:18:44.926929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7] <==
	W0814 01:10:36.536513       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.546034       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.568676       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.570070       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.577949       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.591651       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.598058       1 logging.go:55] [core] [Channel #16 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.613938       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.619402       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.639168       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.648659       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.678383       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.696183       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.709853       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.737735       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.784554       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.805599       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.875844       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.879424       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.879630       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.889082       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.955568       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.959197       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.968726       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.972236       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe] <==
	E0814 01:14:50.801329       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:14:51.338451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:15:20.807648       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:15:21.346194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:15:50.813901       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:15:51.354134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:16:03.714692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-585256"
	E0814 01:16:20.820567       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:21.363703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:16:50.827426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:51.372211       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:16:52.614570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="379.35µs"
	I0814 01:17:04.611968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="237.561µs"
	E0814 01:17:20.833962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:21.379257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:17:50.840279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:51.387703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:18:20.846614       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:21.394688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:18:50.853156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:51.404369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:19:20.860662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:19:21.413839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:19:50.867196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:19:51.422086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:10:52.608122       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:10:52.622552       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0814 01:10:52.622627       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:10:52.898766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:10:52.898859       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:10:52.898892       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:10:52.901404       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:10:52.901750       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:10:52.901866       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:10:52.918305       1 config.go:197] "Starting service config controller"
	I0814 01:10:52.918348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:10:52.918368       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:10:52.918371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:10:52.921429       1 config.go:326] "Starting node config controller"
	I0814 01:10:52.921447       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:10:53.020143       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 01:10:53.020234       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:10:53.023984       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624] <==
	W0814 01:10:43.948860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:10:43.949160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 01:10:43.949352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:10:43.949494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:10:43.949561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.843562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 01:10:44.843668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.848663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:10:44.848755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.955169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 01:10:44.955265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.030248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:10:45.030312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.046128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:10:45.046190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.097166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 01:10:45.097506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.199362       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 01:10:45.199425       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 01:10:45.220123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 01:10:45.220371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0814 01:10:48.037753       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:18:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:18:56.747875    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598336747198006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:18:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:18:56.747935    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598336747198006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:03 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:03.590940    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:19:06 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:06.749706    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598346749296720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:06 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:06.749744    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598346749296720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:15 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:15.591715    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:19:16 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:16.752029    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598356751265341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:16 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:16.753888    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598356751265341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:26 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:26.592042    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:19:26 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:26.755120    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598366754896239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:26 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:26.755162    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598366754896239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:36 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:36.756421    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598376756185409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:36 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:36.756457    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598376756185409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:38 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:38.592926    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:46.610527    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:46.757697    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598386757396064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:46.757730    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598386757396064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:50 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:50.594835    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:19:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:56.759677    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598396759269896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:19:56.759729    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598396759269896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:01 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:20:01.591308    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	
	
	==> storage-provisioner [178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2] <==
	I0814 01:10:53.502743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:10:53.518304       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:10:53.518391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:10:53.549838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:10:53.550404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d!
	I0814 01:10:53.551716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"887a86e8-3ce8-4d79-9ca4-abb6cd830367", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d became leader
	I0814 01:10:53.651495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
E0814 01:20:05.518820   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lzfpz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz: exit status 1 (98.594809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lzfpz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0814 01:12:14.185328   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:13:37.256060   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-901410 -n embed-certs-901410
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:20:40.865578075 +0000 UTC m=+5631.029075283
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-901410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-901410 logs -n 25: (2.070156754s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
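	No preload tarball matched this Kubernetes/runtime combination, so each image was copied from the host-side cache and loaded into CRI-O through podman, as logged above. A rough manual equivalent of one iteration of that loop (illustrative only; the paths are the ones shown in the log) would be:
	  # is the image already known to the container runtime?
	  sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.31.0 \
	    || sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	  # confirm the runtime can now see it
	  sudo crictl images | grep kube-proxy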
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
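	The unit drop-in above ends up at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node (the 316-byte scp a few lines below). One way to double-check what the kubelet was actually started with, assuming the profile name from this log, is roughly:
	  minikube -p no-preload-776907 ssh "systemctl cat kubelet"
	  minikube -p no-preload-776907 ssh "sudo systemctl status kubelet --no-pager"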
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
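	The 2158-byte file written above is the freshly rendered kubeadm configuration; it is promoted from kubeadm.yaml.new to kubeadm.yaml further down, just before the init phases run. If one wanted to sanity-check it by hand first, something along these lines should work (a sketch; kubeadm config validate is available in recent kubeadm releases, including the v1.31.0 binaries staged here):
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new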
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
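	The bash one-liner above strips any existing control-plane.minikube.internal entry from /etc/hosts and appends the pinned node IP. After it runs, the grep from two lines earlier would be expected to return the pinned entry:
	  grep 'control-plane.minikube.internal' /etc/hosts
	  # expected, per this log: 192.168.72.94	control-plane.minikube.internal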
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
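	Each CA bundle is placed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above). The hash part of each link name is exactly what the preceding openssl call prints, e.g.:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints the subject hash (b5213941 here), hence the link /etc/ssl/certs/b5213941.0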
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
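	The -checkend 86400 calls above assert that none of the existing control-plane certificates expires within the next 24 hours (openssl exits non-zero if one would). A compact way to run the same check across all of them, using the paths from this log, might look like:
	  for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	           etcd/server etcd/healthcheck-client etcd/peer; do
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/$c.crt \
	      || echo "$c expires within 24h"
	  done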
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
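	Because existing configuration files were detected, minikube rebuilds the control plane by replaying individual kubeadm init phases instead of running a full kubeadm init. Stripped of the ssh_runner wrapping, the sequence above amounts to:
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml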
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
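	Before the `systemctl restart crio` above, the runtime config is edited in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup manager is forced to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of that same rewrite, with the path and values taken from the log; performing it locally instead of over SSH inside the VM, and the helper name, are illustrative assumptions:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the two edits the log records: pin the pause image
	// and switch the cgroup manager, replacing whole matching lines like the sed
	// one-liners above do.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10", "cgroupfs")
		if err != nil {
			fmt.Println(err)
		}
	}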
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
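	The 403 → 500 → 200 sequence above is the restarted apiserver coming up: the first probes are rejected for the anonymous user, then the rbac/bootstrap-roles and priority-class post-start hooks finish and /healthz flips to ok. A minimal Go sketch of that polling loop against the URL in the log; the insecure TLS client here is only a stand-in for minikube's real client-certificate setup, and the retry interval is an assumption:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout expires, printing the intermediate failures much like the
	// 403/500 bodies captured in the log.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.72.94:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}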
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
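	Every pod above is skipped for the same reason: node no-preload-776907 has not reported Ready yet, so the per-pod wait on the PodReady condition cannot succeed and the loop falls through after recording each duration. A hedged client-go sketch of that kind of readiness poll (the kubeconfig path, namespace, pod name and timings are placeholders; this is not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its PodReady condition is True or the
	// deadline passes, mirroring the "waiting up to 4m0s for pod ..." steps above.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-776907", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}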
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
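The hosts update above (and the matching control-plane.minikube.internal update later in this log) follows one pattern: drop any stale line for the name, append a fresh "IP<TAB>name" mapping, and copy the result back over /etc/hosts. A minimal Go sketch of the string-rewriting half of that pattern; the helper name ensureHostsEntry is invented for illustration and is not minikube's API.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any existing line ending in "<TAB>hostname" and
    // appends a fresh "ip<TAB>hostname" mapping, loosely mirroring the
    // grep -v / echo pipeline shown in the log above.
    func ensureHostsEntry(hosts, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // remove the stale mapping
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }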
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
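The preload step above copies the cached image tarball to the guest and unpacks it with tar's lz4 filter before deleting the archive. A minimal Go sketch of the same extraction run locally rather than through minikube's ssh_runner; the paths and the use of os/exec here are assumptions for illustration only.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Unpack an lz4-compressed tarball into /var, preserving security
    	// xattrs, mirroring the command shown in the log above. Assumes the
    	// lz4 binary is installed and the tarball path exists.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Printf("extracted preloaded images:\n%s", out)
    }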
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
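The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being swapped into place later in this log. A hedged sketch of how one could sanity-check such a multi-document file before the swap; the file path comes from the log, while using gopkg.in/yaml.v3 for the check is an assumption, not what minikube itself does here.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Confirm every YAML document in the staged config parses and
    	// declares a kind before it replaces kubeadm.yaml.
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		var obj map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
    			log.Fatalf("document %d does not parse: %v", i, err)
    		}
    		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, obj["kind"], obj["apiVersion"])
    	}
    }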
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
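The block above runs openssl x509 -checkend 86400 against each control-plane certificate, i.e. it asks whether the cert expires within the next 24 hours. A minimal Go equivalent using crypto/x509; the path is one of the certs named in the log, and doing the check in-process rather than via openssl is an assumption for illustration.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Fail if the certificate expires within the next 24h, mirroring
    	// `openssl x509 -noout -checkend 86400`.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		log.Fatalf("certificate expires too soon: %v", cert.NotAfter)
    	}
    	fmt.Printf("certificate valid until %v\n", cert.NotAfter)
    }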
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
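While the default-k8s-diff-port node is being restarted, the old-k8s-version machine is still booting, and the kvm2 driver polls libvirt for its DHCP lease with a growing delay (the retry.go:31 lines above). A small sketch of that wait-with-backoff pattern; the probe function, delays, and names here are invented for illustration and are not minikube's retry API.

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"time"
    )

    // waitForIP keeps calling probe with an increasing delay until it returns
    // an address or the deadline passes, loosely mirroring the retry loop in
    // the log above.
    func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := probe(); err == nil {
    			return ip, nil
    		}
    		log.Printf("will retry after %v: waiting for machine to come up", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the delay between attempts
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet")
    		}
    		return "192.0.2.10", nil // placeholder address
    	}, 30*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("machine came up at", ip)
    }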
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
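	The [+]/[-] dumps above are successive /healthz responses from the apiserver while its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish; minikube simply polls the endpoint until it returns 200. A minimal, self-contained sketch of such a polling loop follows (illustrative only; the function name, timeouts and cadence are assumptions, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 OK or
	// the overall timeout expires. TLS verification is skipped because the
	// apiserver in this setup serves a self-signed certificate.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the endpoint answers plain "ok" once all hooks pass
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log timestamps
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.110:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}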
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
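	The pod_ready lines above repeatedly fetch each system pod and test its Ready condition until it reports True. A rough client-go equivalent of that wait (a sketch assuming a standard kubeconfig; it is not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumes a kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Example pod name taken from the log above.
		const ns, name = "kube-system", "etcd-no-preload-776907"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("timed out waiting for pod %q\n", name)
	}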
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
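	provision.go:117 above issues a server certificate signed by the minikube CA, with the organization and SAN list shown in the log. A compact crypto/x509 sketch of producing such a SAN-bearing server certificate from an existing CA (illustrative; the ca.pem/ca-key.pem paths and the PKCS#1 key encoding are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Load the signing CA (placeholder paths for the files kept under .minikube/certs;
		// the key is assumed to be PKCS#1-encoded).
		caPEM, err := os.ReadFile("ca.pem")
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		must(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-179312"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the cluster config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the provisioning log.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-179312"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.123")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		must(err)
		must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
		must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
	}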
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
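	Every ssh_runner.go "Run:" entry above is a command executed over SSH inside the VM. A bare-bones sketch of driving the same kind of remote edits with golang.org/x/crypto/ssh (not minikube's ssh_runner; the host, key path and sed/systemctl commands are taken from the log, the rest is assumed):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the test VM's host key is not pinned
		}
		client, err := ssh.Dial("tcp", "192.168.61.123:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		// The same kind of edits the log shows: point cri-o at the pause image and
		// the cgroupfs driver, then restart it.
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl restart crio",
		}
		for _, cmd := range cmds {
			sess, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			out, err := sess.CombinedOutput(cmd)
			sess.Close()
			fmt.Printf("$ %s\n%s", cmd, out)
			if err != nil {
				panic(err)
			}
		}
	}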
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
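The embed-certs-901410 machine has no DHCP lease yet, so the driver keeps re-querying the network with an increasing delay ("will retry after ...: waiting for machine to come up"). A minimal Go sketch of the same wait-with-backoff pattern (the lookup callback, delays, and jitter are illustrative assumptions, not minikube's retry implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with a growing, jittered delay until it returns an
// address or the overall timeout expires. lookup stands in for querying the
// libvirt DHCP leases; it is not minikube's actual API.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay += delay / 2 // back off gradually, capping the growth
		}
	}
	return "", errors.New("timed out waiting for the machine to get an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.210", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}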
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
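The sequence above shows the fallback taken when the preload does not contain the v1.20.0 images: inspect the runtime for each image, remove any stale tag with crictl, then try to load a per-image tarball from the local cache, which fails here because the cached pause_3.2 file is absent. A condensed sketch of that per-image fallback (the run callback and path layout are assumptions for illustration, not minikube's cache_images code):

package images

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// ensureImage checks the container runtime for the image, drops any stale tag
// via crictl, and falls back to a tarball in the local image cache. run
// abstracts "execute a command on the node over SSH" and is an assumption for
// this sketch.
func ensureImage(run func(cmd string) error, image, cacheDir string) error {
	if err := run("sudo podman image inspect --format {{.Id}} " + image); err == nil {
		return nil // already present in the runtime, nothing to transfer
	}
	_ = run("sudo /usr/bin/crictl rmi " + image) // remove a stale/partial tag; ignore failure

	// the cache stores e.g. registry.k8s.io/pause:3.2 as .../registry.k8s.io/pause_3.2
	tarball := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("no cached copy of %s: %w", image, err)
	}
	// a full implementation would now scp the tarball to the node before loading it
	return run("sudo podman load -i /tmp/" + filepath.Base(tarball))
}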
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
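The six openssl probes above each ask whether a control-plane certificate expires within the next 86400 seconds (one day); a non-zero exit means regeneration is needed. The same check expressed in Go as a standalone sketch (the path is reused from the log; the helper name is an illustrative assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}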
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
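The five commands above re-run only the relevant `kubeadm init phase` steps, in order, instead of doing a full `kubeadm init`: certs, kubeconfigs, kubelet start, control-plane static pods, then local etcd. A condensed Go sketch of driving that sequence (the runner callback and the literal paths are assumptions for illustration, not minikube's exact code):

package cluster

import "fmt"

// restartControlPlanePhases runs the phase sequence visible in the log above.
// run stands in for executing a command on the node over SSH.
func restartControlPlanePhases(run func(string) error) error {
	const (
		env = `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase `
		cfg = ` --config /var/tmp/minikube/kubeadm.yaml`
	)
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		if err := run(`/bin/bash -c "` + env + phase + cfg + `"`); err != nil {
			return fmt.Errorf("kubeadm init phase %s failed: %w", phase, err)
		}
	}
	return nil
}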
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
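Once the manifests are in place, the restart path simply polls for a kube-apiserver process roughly every 500ms, as the repeated pgrep runs above show. A minimal sketch of that wait loop (the run callback is an assumed stand-in for executing commands on the node):

package cluster

import (
	"fmt"
	"time"
)

// waitForAPIServerProcess re-runs the pgrep probe on a fixed 500ms cadence,
// matching the timestamps above, until kube-apiserver shows up or the timeout
// expires.
func waitForAPIServerProcess(run func(string) error, timeout time.Duration) error {
	const probe = "sudo pgrep -xnf kube-apiserver.*minikube.*"
	deadline := time.Now().Add(timeout)
	for {
		if err := run(probe); err == nil {
			return nil // pgrep exited 0: the process exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}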
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
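The provisioning step above pushes an idempotent shell snippet over SSH that rewrites the 127.0.1.1 entry in /etc/hosts for the machine hostname. A minimal Go sketch of the same idempotent update, run locally rather than over SSH; the function name and behavior are illustrative, not minikube's own code:

// ensureHostsEntry maps 127.0.1.1 to the given hostname in a hosts file,
// replacing an existing 127.0.1.1 line or appending one if none exists.
// Illustrative sketch only; minikube performs the equivalent edit over SSH.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	entry := "127.0.1.1 " + hostname
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
			lines[i] = entry
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, entry)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-901410"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}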
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
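The copyHostCerts/configureAuth sequence above generates a server certificate whose SANs cover 127.0.0.1, the guest IP, the hostname, localhost and minikube, then copies it to /etc/docker on the guest. A self-contained Go sketch of issuing such a SAN-bearing certificate, assuming an in-memory CA for brevity (minikube instead loads ca.pem/ca-key.pem from ~/.minikube/certs; key sizes and lifetimes here are assumptions):

// Sketch: issue a server certificate with the SANs seen in the log, signed by
// an in-memory CA. Errors are elided for brevity in this illustration.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA (minikube would load its CA key pair from disk instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with DNS and IP SANs matching the provision log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-901410"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-901410", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.210")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}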
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
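The fix.go lines above read the guest clock with `date +%s.%N` over SSH and accept the host when the delta against the local clock is within tolerance. A small Go sketch of parsing that timestamp format and checking the skew; the 2-second tolerance is an assumption for illustration, not a value taken from minikube's source:

// Sketch: parse a `date +%s.%N`-style timestamp and compare it to the local
// clock against an assumed tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseUnixDotNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fractional part to nine digits so it reads as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixDotNanos("1723597590.505165718") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if math.Abs(delta.Seconds()) <= 2 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}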
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
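After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing `crictl version`. A hedged Go sketch of that kind of socket wait loop; the 60s timeout mirrors the log line, while the poll interval is an assumption:

// Sketch: poll for a unix socket path until it exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}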
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
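The kubeadm/kubelet/kube-proxy configuration printed above is rendered in memory and then written to /var/tmp/minikube/kubeadm.yaml.new on the guest. A hedged Go sketch of rendering a trimmed fragment of such a config with text/template; the template literal and field set are an illustration, not minikube's full config:

// Sketch: render a trimmed kubeadm ClusterConfiguration from values like the
// ones in the log, using text/template.
package main

import (
	"os"
	"text/template"
)

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	ControlPlane      string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
	_ = t.Execute(os.Stdout, params{
		ControlPlane:      "control-plane.minikube.internal",
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}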
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
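The `openssl x509 ... -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another day before attempting a cluster restart. The same check expressed in Go with crypto/x509; the file path and the 24h window mirror the log, while the helper name is made up for the sketch:

// Sketch: report whether a PEM certificate expires within the next 24h,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}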
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
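From here the log polls https://192.168.50.210:8443/healthz, first getting connection refused, then 403 (anonymous user) and 500 responses while the apiserver's post-start hooks finish. A Go sketch of such a retry loop; skipping TLS verification and treating anything other than 200 as "not ready" are assumptions for the illustration, not a claim about minikube's exact client setup:

// Sketch: poll an HTTPS /healthz endpoint until it returns 200 or a deadline
// passes. TLS verification is skipped because this sketch has no cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready yet: HTTP %d\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.210:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}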
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
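The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist above are the bridge CNI configuration announced by the "Configuring bridge CNI (Container Networking Interface)" step. The exact contents written by minikube are not reproduced in this log; as a hedged sketch, a generic bridge conflist of this shape could be installed by hand like so (all field values here are illustrative defaults, not minikube's actual file):

    #!/bin/bash
    # Write a minimal bridge CNI config in the location minikube uses for its own.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF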
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
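Each pod_ready block above polls the pod's Ready condition through the API server until it flips to True; for the metrics-server pods it never does in this run, which is what the metrics-server related failures in this report reduce to. The same condition can be inspected from a workstation with kubectl; a sketch, assuming the profile name doubles as the kubeconfig context (as with the other profiles in this report) and the standard k8s-app=metrics-server label:

    #!/bin/bash
    # Print the Ready condition of the metrics-server pod(s) in the embed-certs profile.
    kubectl --context embed-certs-901410 -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    # Or block until Ready, with a 4-minute cap mirroring the harness timeout:
    kubectl --context embed-certs-901410 -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m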
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
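At this point no control-plane containers exist on this v1.20.0 node, so minikube falls back to gathering node-level diagnostics: the kubelet and CRI-O journals, dmesg, container status, and a describe-nodes attempt that fails while the API server is down. The same bundle can be collected by hand; a sketch, assuming SSH access to the node (the host variable is a hypothetical placeholder; the kubectl binary path matches the v1.20.0 cluster in this run):

    #!/bin/bash
    # Collect roughly the diagnostics minikube's log gatherer runs above.
    NODE="user@old-k8s-version-node"   # hypothetical SSH target; substitute the node address
    ssh "$NODE" 'sudo journalctl -u kubelet -n 400'  > kubelet.log
    ssh "$NODE" 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400' > dmesg.log
    ssh "$NODE" 'sudo journalctl -u crio -n 400'     > crio.log
    ssh "$NODE" 'sudo crictl ps -a'                  > containers.log
    ssh "$NODE" 'sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig' > nodes.log || true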
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
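
Every "describe nodes" attempt fails the same way: the connection to localhost:8443 is refused because no kube-apiserver container is running, so nothing is listening on the apiserver port. A small probe that makes that distinction explicit before invoking kubectl; the address and timeout are assumptions for illustration, not values taken from the test harness:

    // Minimal reachability check for the apiserver port, separate from kubectl.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no kube-apiserver container running, this is the expected outcome
    		// and corresponds to the "connection refused" errors in the log above.
    		fmt.Println("apiserver port not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
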
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
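
In parallel, three other test processes (PIDs 61115, 61447, 61689) keep polling their metrics-server pods, which never report Ready. A rough, hedged equivalent of that check shelling out to kubectl; the context name is a placeholder, the pod name is one seen in the log above, and the retry count and interval are illustrative:

    // Poll a pod's Ready condition via kubectl's jsonpath output.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func podReady(context, namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
    		"get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	for i := 0; i < 5; i++ {
    		ready, err := podReady("minikube", "kube-system", "metrics-server-6867b74b74-82tmq")
    		fmt.Printf("ready=%v err=%v\n", ready, err)
    		if ready {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
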
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
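
When no containers are found, the harness still gathers node-level logs over SSH: kubelet and CRI-O via journalctl plus a filtered dmesg tail. A self-contained sketch of the kubelet step run locally, with the unit name and line count copied from the commands above; the error handling is illustrative rather than minikube's:

    // Tail the last N lines of a systemd unit's journal, as the log-gathering step does.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func tailUnit(unit string, lines int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := tailUnit("kubelet", 400)
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    	}
    	fmt.Print(out)
    }
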
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
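
The repeated "pgrep -xnf kube-apiserver.*minikube.*" runs at the top of each cycle amount to a wait loop for the apiserver process. A sketch of such a loop under assumed values; the deadline and poll interval are illustrative, not minikube's actual retry and backoff settings:

    // Poll for the kube-apiserver process until it appears or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// pgrep exits non-zero when nothing matches, which Run() reports as an error.
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
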
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
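The startup above gates "Done!" on every kube-system pod reaching phase Running (the still-Pending metrics-server is tracked separately), the default service account existing, and the kubelet service being active. Below is a minimal client-go sketch of that kind of pod wait; it is illustrative only, not minikube's actual implementation, and the kubeconfig path is the host path visible elsewhere in this log.

// waitpods.go: poll kube-system until every pod reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location (taken from the settings.go line later in this log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19429-9425/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-system pods are Running")
}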
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
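The kubelet-check and api-check phases above poll plain HTTP(S) health endpoints: the kubelet at http://127.0.0.1:10248/healthz (shown in the log) and the API server on this profile's port 8444. The sketch below probes both from the node; the apiserver /healthz path and the insecure TLS setting are assumptions for illustration, not a claim about exactly what kubeadm does internally.

// healthprobe.go: probe the kubelet and apiserver health endpoints from the node.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(client *http.Client, url string) {
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("%s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: %s\n", url, resp.Status)
}

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is self-signed here; skip verification for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	probe(client, "http://127.0.0.1:10248/healthz") // kubelet, as in the kubelet-check step
	probe(client, "https://127.0.0.1:8444/healthz") // apiserver on this profile's port (assumed path)
}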
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
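The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown in the log. As a sketch only, the snippet below writes a generic bridge + host-local example; the JSON is illustrative and is not claimed to match the file minikube actually generates.

// writecni.go: write an illustrative bridge CNI config to /etc/cni/net.d.
package main

import (
	"fmt"
	"os"
)

// Generic bridge + host-local example (subnet and plugin options are assumptions).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	path := "/etc/cni/net.d/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}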
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
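The elevateKubeSystemPrivileges wait above retries "sudo .../kubectl get sa default" roughly every 500ms until the default service account appears after the minikube-rbac clusterrolebinding is created. A sketch of that retry loop, reusing the binary and kubeconfig paths shown in the log (the 5-minute deadline is an assumption):

// waitsa.go: retry "kubectl get sa default" until the default service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}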
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
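For readers following the pod_ready.go entries above: the test waits for each system-critical pod to report the Ready condition before moving on. Below is a minimal client-go sketch of that kind of wait; it is illustrative only (the podReady helper, the hard-coded pod name, and the kubeconfig path are assumptions, not minikube's actual code).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "kube-scheduler-default-k8s-diff-port-585256" // pod name taken from the log; adjust as needed
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}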
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
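The node_conditions.go lines above record the NodePressure check, which reads node capacity (ephemeral storage, CPU) from the API. A minimal client-go sketch that reads the same fields is shown below; the kubeconfig path is an assumption for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // use your own kubeconfig
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// Matches the values logged above, e.g. cpu=2, ephemeral-storage=17734596Ki.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}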
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
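The grep/rm sequence above is the stale-config check: each /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs again. A rough Go sketch of that logic (an illustration, not the kubeadm.go implementation) looks like:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // expected apiserver URL
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat it as stale and delete it,
			// mirroring the `sudo rm -f` calls in the log above.
			fmt.Printf("removing stale config %s\n", path)
			_ = os.Remove(path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}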
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
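Here the test copies a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a comparable conflist; the exact JSON (plugin options, the 10.244.0.0/16 subnet) is an assumption for illustration and not the file minikube ships.

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Requires root; the integration test performs the equivalent copy over SSH with sudo.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}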
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
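The burst of `kubectl get sa default` commands above (one roughly every 500ms) is the wait for the default service account to appear so that kube-system privileges can be elevated. A minimal sketch of that polling loop, reusing the binary and kubeconfig paths from the log, could be:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl" // path from the log; adjust locally
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" service account exists in the cluster.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}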
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
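As with the other profile earlier in the log, the api_server.go entries poll the apiserver's /healthz endpoint until it returns 200 "ok". A standalone sketch of such a poll follows; waitForHealthz and the skip-verify TLS config are illustrative choices, not minikube's code (minikube uses the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Skip certificate verification purely for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Endpoint taken from the log above; adjust for your cluster.
	if err := waitForHealthz("https://192.168.50.210:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}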
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
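	Editor's note: the K8S_KUBELET_NOT_RUNNING exit above closes with minikube's own suggestion to inspect the kubelet journal and to retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry, assuming the same v1.20.0 / CRI-O setup used in this run (the profile placeholder and the explicit --container-runtime flag are illustrative, not taken from the harness output):

		journalctl -xeu kubelet
		out/minikube-linux-amd64 start -p <profile> \
			--kubernetes-version=v1.20.0 \
			--container-runtime=crio \
			--extra-config=kubelet.cgroup-driver=systemd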
	
	
	==> CRI-O <==
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.389893817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598442389863707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c699c9b-8cb9-4e9b-9070-59b4fbc8bf69 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.390354031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb3f12b6-b64a-409e-916e-d73d13002125 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.390417234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb3f12b6-b64a-409e-916e-d73d13002125 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.390618983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb3f12b6-b64a-409e-916e-d73d13002125 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.432380447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=672fa13f-c514-4931-9a82-873caf9ffafa name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.432494020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=672fa13f-c514-4931-9a82-873caf9ffafa name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.433779240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=474d0e23-6190-4fc5-8547-3237162b4710 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.434381912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598442434355831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=474d0e23-6190-4fc5-8547-3237162b4710 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.434959584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93444fdd-6a47-490a-a939-38aa9a91a422 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.435067635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93444fdd-6a47-490a-a939-38aa9a91a422 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.435345397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93444fdd-6a47-490a-a939-38aa9a91a422 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.474084782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=029c8349-df1b-4251-90e0-567c87b52d16 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.474204379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=029c8349-df1b-4251-90e0-567c87b52d16 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.475979735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70159a07-133f-406b-9e86-d38266e322be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.476590164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598442476564622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70159a07-133f-406b-9e86-d38266e322be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.477417437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=356762b8-229d-4fa5-a746-99ad7a65bcce name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.477516301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=356762b8-229d-4fa5-a746-99ad7a65bcce name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.477799607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=356762b8-229d-4fa5-a746-99ad7a65bcce name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.510584291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0c17ca2-55be-42e7-9059-1af77f39025d name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.510691363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0c17ca2-55be-42e7-9059-1af77f39025d name=/runtime.v1.RuntimeService/Version
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.511592034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dbb80717-1c44-4518-bf61-d41472194ccc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.511978432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598442511957833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dbb80717-1c44-4518-bf61-d41472194ccc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.512674543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e4b4751-67cb-4189-8fad-24ef27381578 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.512726954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e4b4751-67cb-4189-8fad-24ef27381578 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:20:42 embed-certs-901410 crio[719]: time="2024-08-14 01:20:42.512929154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e4b4751-67cb-4189-8fad-24ef27381578 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	41fb6b83dfb3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   456824ba216bc       coredns-6f6b679f8f-bq2xk
	31ff007001de7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1aeb9620f6e92       coredns-6f6b679f8f-lwd2j
	494f8cefbe325       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d22748ea915f0       storage-provisioner
	3217f55ca95d1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   59094e46534ec       kube-proxy-fqmzw
	d3eb4c3d01238       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   00be091a5308b       kube-controller-manager-embed-certs-901410
	5e65af6cb886c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   16ffbc8b42780       etcd-embed-certs-901410
	d9118fcad6781       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   d491ad9827cf4       kube-scheduler-embed-certs-901410
	9eb586f43234c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   b3fad8a44c7c9       kube-apiserver-embed-certs-901410
	b6efc64c66f05       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   27c7b14f7d557       kube-apiserver-embed-certs-901410
	
	
	==> coredns [31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-901410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-901410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=embed-certs-901410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 01:11:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-901410
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:16:42 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:16:42 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:16:42 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:16:42 +0000   Wed, 14 Aug 2024 01:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.210
	  Hostname:    embed-certs-901410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 93e154269592459d97e1c17229f46f37
	  System UUID:                93e15426-9592-459d-97e1-c17229f46f37
	  Boot ID:                    300eaa70-a88c-442b-b909-4a6828c5fd21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-bq2xk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-lwd2j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-embed-certs-901410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-embed-certs-901410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-embed-certs-901410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-fqmzw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-901410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-mwl74               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m22s)  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m22s)  kubelet          Node embed-certs-901410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m22s)  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node embed-certs-901410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node embed-certs-901410 event: Registered Node embed-certs-901410 in Controller
	
	
	==> dmesg <==
	[  +0.062072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046627] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.031549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.807074] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.624816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.508933] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.060346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066118] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.162115] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.135580] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.254977] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +3.908429] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +1.835118] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.065900] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.493363] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.246596] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 01:11] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.549789] systemd-fstab-generator[2623]: Ignoring "noauto" option for root device
	[  +4.598913] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.447837] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +5.379031] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +0.091855] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.952538] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148] <==
	{"level":"info","ts":"2024-08-14T01:11:21.887382Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-14T01:11:21.887456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-14T01:11:21.887484Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-14T01:11:21.890385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 switched to configuration voters=(8563523299037207587)"}
	{"level":"info","ts":"2024-08-14T01:11:21.890709Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"92c5f3445ccd6516","local-member-id":"76d7bf11a8e4dc23","added-peer-id":"76d7bf11a8e4dc23","added-peer-peer-urls":["https://192.168.50.210:2380"]}
	{"level":"info","ts":"2024-08-14T01:11:22.251065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T01:11:22.251110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T01:11:22.251142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 received MsgPreVoteResp from 76d7bf11a8e4dc23 at term 1"}
	{"level":"info","ts":"2024-08-14T01:11:22.251153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 received MsgVoteResp from 76d7bf11a8e4dc23 at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 76d7bf11a8e4dc23 elected leader 76d7bf11a8e4dc23 at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.255182Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.259800Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"76d7bf11a8e4dc23","local-member-attributes":"{Name:embed-certs-901410 ClientURLs:[https://192.168.50.210:2379]}","request-path":"/0/members/76d7bf11a8e4dc23/attributes","cluster-id":"92c5f3445ccd6516","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T01:11:22.260632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"92c5f3445ccd6516","local-member-id":"76d7bf11a8e4dc23","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260750Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:11:22.261085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:11:22.268167Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:11:22.275131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.210:2379"}
	{"level":"info","ts":"2024-08-14T01:11:22.275675Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:11:22.279433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T01:11:22.279460Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T01:11:22.279842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:20:42 up 14 min,  0 users,  load average: 0.64, 0.30, 0.14
	Linux embed-certs-901410 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8] <==
	W0814 01:16:25.122729       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:16:25.122816       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:16:25.123787       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:16:25.123876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:17:25.124892       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:17:25.125179       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:17:25.125352       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:17:25.125415       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 01:17:25.126392       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:17:25.126488       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:19:25.126904       1 handler_proxy.go:99] no RequestInfo found in the context
	W0814 01:19:25.126937       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:19:25.127345       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0814 01:19:25.127235       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 01:19:25.128559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:19:25.128688       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c] <==
	W0814 01:11:17.623235       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.634916       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.669376       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.781813       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.797649       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.807698       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.811235       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.854353       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.891616       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.906372       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.912888       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.918379       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.939965       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.943329       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.974275       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.982093       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.001569       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.094658       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.099079       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.132635       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.141433       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.143834       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.324951       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.436610       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.462600       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b] <==
	E0814 01:15:31.126253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:15:31.565596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:16:01.132793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:01.576081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:16:31.140061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:16:31.585832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:16:42.707775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-901410"
	E0814 01:17:01.146873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:01.593134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:17:24.716645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="388.298µs"
	E0814 01:17:31.153282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:17:31.600569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:17:35.707087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="120.727µs"
	E0814 01:18:01.161467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:01.609752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:18:31.168117       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:18:31.617585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:19:01.174741       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:19:01.624666       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:19:31.182081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:19:31.633344       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:20:01.187927       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:20:01.640920       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:20:31.196160       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:20:31.648237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:11:33.037319       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:11:33.066538       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.210"]
	E0814 01:11:33.066608       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:11:33.137963       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:11:33.138036       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:11:33.138090       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:11:33.144440       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:11:33.144732       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:11:33.144744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:11:33.146307       1 config.go:197] "Starting service config controller"
	I0814 01:11:33.146332       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:11:33.146360       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:11:33.146365       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:11:33.147999       1 config.go:326] "Starting node config controller"
	I0814 01:11:33.148066       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:11:33.247140       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 01:11:33.247230       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:11:33.249169       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975] <==
	W0814 01:11:24.546067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 01:11:24.547345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.547362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:11:24.547394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.547415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:11:24.547430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 01:11:24.547446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550142       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 01:11:24.550197       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 01:11:24.550603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 01:11:24.550656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.550791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.550876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 01:11:24.550970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:25.444056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:11:25.444169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0814 01:11:26.147963       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:19:28 embed-certs-901410 kubelet[2950]: E0814 01:19:28.691539    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:19:36 embed-certs-901410 kubelet[2950]: E0814 01:19:36.863121    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598376862331322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:36 embed-certs-901410 kubelet[2950]: E0814 01:19:36.863549    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598376862331322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:41 embed-certs-901410 kubelet[2950]: E0814 01:19:41.692285    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:19:46 embed-certs-901410 kubelet[2950]: E0814 01:19:46.865454    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598386865170971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:46 embed-certs-901410 kubelet[2950]: E0814 01:19:46.866295    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598386865170971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:56 embed-certs-901410 kubelet[2950]: E0814 01:19:56.692981    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:19:56 embed-certs-901410 kubelet[2950]: E0814 01:19:56.868472    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598396867691227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:19:56 embed-certs-901410 kubelet[2950]: E0814 01:19:56.868999    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598396867691227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:06 embed-certs-901410 kubelet[2950]: E0814 01:20:06.870108    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598406869839168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:06 embed-certs-901410 kubelet[2950]: E0814 01:20:06.870179    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598406869839168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:10 embed-certs-901410 kubelet[2950]: E0814 01:20:10.694156    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:20:16 embed-certs-901410 kubelet[2950]: E0814 01:20:16.871276    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598416870973174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:16 embed-certs-901410 kubelet[2950]: E0814 01:20:16.871315    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598416870973174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:23 embed-certs-901410 kubelet[2950]: E0814 01:20:23.692923    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]: E0814 01:20:26.709178    2950 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]: E0814 01:20:26.872811    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598426872427135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:26 embed-certs-901410 kubelet[2950]: E0814 01:20:26.872836    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598426872427135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:36 embed-certs-901410 kubelet[2950]: E0814 01:20:36.692869    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:20:36 embed-certs-901410 kubelet[2950]: E0814 01:20:36.873970    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598436873694208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:20:36 embed-certs-901410 kubelet[2950]: E0814 01:20:36.874051    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598436873694208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0] <==
	I0814 01:11:33.371576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:11:33.407831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:11:33.407876       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:11:33.432157       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:11:33.432300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1!
	I0814 01:11:33.432356       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86f447d8-c26e-4e0d-89f9-4906967e1531", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1 became leader
	I0814 01:11:33.533508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-901410 -n embed-certs-901410
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-901410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mwl74
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74: exit status 1 (62.82535ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mwl74" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)
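For reference, the post-mortem steps above can be repeated by hand while the embed-certs-901410 profile still exists; this is a minimal sketch using only the commands already shown in the helpers output (the profile/context name is taken from this report, and the non-running pod name will differ on a fresh run):

	# API server status for the profile (as in helpers_test.go:254)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-901410 -n embed-certs-901410

	# list pods that are not Running across all namespaces (as in helpers_test.go:261)
	kubectl --context embed-certs-901410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

	# describe the non-running pod reported above (as in helpers_test.go:277)
	kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74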

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E0814 01:15:05.519568   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[the warning above was logged 105 times in a row while the apiserver at 192.168.61.123:8443 refused connections]
E0814 01:17:14.185845   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[the same warning was logged another 75 times after the error above]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical WARNING repeated 84 more times while the apiserver at 192.168.61.123:8443 remained unreachable; duplicate lines omitted]
E0814 01:22:14.186210   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical WARNING repeated 54 more times; duplicate lines omitted]
E0814 01:23:08.597423   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical WARNING repeated 20 more times; duplicate lines omitted]
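Each WARNING above corresponds to a single poll: the test helper lists pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label selector, and the request fails with "connection refused" because nothing is answering on 192.168.61.123:8443 while the cluster is restarting. A minimal client-go sketch of an equivalent query is shown below; the kubeconfig path, variable names, and error handling are illustrative assumptions and do not reproduce the actual helpers_test.go code.

	// Illustrative only: issue the same pod-list request that each WARNING poll makes.
	// The kubeconfig path below is an assumption, not taken from this test run.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// This issues the request behind the logged URL:
		// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver down, this is where "connection refused" surfaces.
			fmt.Println("poll failed:", err)
			return
		}
		fmt.Println("found", len(pods.Items), "dashboard pods")
	}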
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (228.792758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-179312" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
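The 9m0s in the message above is a context deadline wrapped around repeated polls of the pod list shown earlier: individual polls fail with "connection refused", and once the deadline expires the wait gives up with "context deadline exceeded". A hedged sketch of that pattern using wait.PollUntilContextTimeout (available in recent apimachinery releases) follows; the 5-second interval and the readiness check are assumptions, not the exact minikube helper implementation.

	// Illustrative wait loop: poll for a Ready dashboard pod until a 9-minute deadline.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForDashboardPod(ctx context.Context, client kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 5*time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
					metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
				if err != nil {
					// Transient errors (e.g. connection refused) just trigger another poll.
					return false, nil
				}
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							return true, nil
						}
					}
				}
				return false, nil
			})
	}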
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (212.779049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
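The --format flag passed to minikube status in the post-mortem checks above is a Go text/template evaluated against the status object, which is why {{.APIServer}} printed only "Stopped" and {{.Host}} printed only "Running". The snippet below shows the general template mechanism with a stand-in struct; the field set is an assumption for illustration, not minikube's actual status type.

	// Illustrative: how a "{{.APIServer}}"-style template selects one status field.
	package main

	import (
		"os"
		"text/template"
	)

	type status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// With the stand-in values below this prints "Stopped".
		_ = tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}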
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25: (1.532189001s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
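The runtime preparation above is driven entirely by sed edits applied over SSH (pause image pin, cgroupfs cgroup manager, conmon cgroup, default sysctls), followed by a daemon-reload and a crio restart. A minimal Go sketch of the same two key edits, assuming local execution instead of minikube's ssh_runner; the config path and values are taken from the log, everything else is illustrative:

    // crio_conf_sketch.go: apply the two edits the log shows (pause image and
    // cgroup manager) to /etc/crio/crio.conf.d/02-crio.conf. Not minikube's code.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
    	edits := []string{
    		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
    		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
    	}
    	for _, e := range edits {
    		// minikube runs these through ssh_runner; here sed is exec'd locally.
    		out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput()
    		if err != nil {
    			fmt.Printf("sed %q failed: %v\n%s\n", e, err, out)
    			return
    		}
    	}
    	// The changes only take effect after the systemctl daemon-reload /
    	// systemctl restart crio steps that follow in the log.
    	fmt.Println("crio config updated; restart crio to apply")
    }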
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
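The healthz wait above goes through three phases: 403 while the request is still treated as system:anonymous (RBAC bootstrap roles not yet created), 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. A minimal Go sketch of polling the same endpoint; the poll interval, deadline, and insecure TLS setting are assumptions, not minikube's actual client:

    // healthz_poll_sketch.go: poll the apiserver /healthz endpoint until it
    // returns 200, tolerating the 403/500 responses seen in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.72.94:8443/healthz" // endpoint from the log
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The probe carries no client certs, so the apiserver sees it as
    		// system:anonymous; skip server cert verification for the same reason.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz ok: %s\n", body)
    				return
    			}
    			// 403 before RBAC bootstrap, 500 while post-start hooks run.
    			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }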
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
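Each system-critical pod above gets up to 4 minutes, but the wait short-circuits (and logs the WaitExtra error) as long as the node itself still reports Ready=False. A hedged client-go sketch of the underlying readiness check; the kubeconfig path is illustrative and this is not minikube's own pod_ready code:

    // pod_ready_sketch.go: list kube-system pods and report whether each has a
    // PodReady condition with status True, mirroring the gate used above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// kubeconfig path is an assumption for the sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s ready=%v\n", p.Name, ready)
    	}
    }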
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
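The oom_adj probe is a one-liner: find the kube-apiserver pid with pgrep and read /proc/<pid>/oom_adj; -16 means the kernel is far less likely to OOM-kill the apiserver. A small sketch of the same probe, with illustrative error handling:

    // oom_adj_sketch.go: read the apiserver's oom_adj the same way the log does.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0] // first matching pid
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", data)
    }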
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
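The one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the fresh one, and copy the temp file back into place. A Go sketch of the same rewrite; the IP and hostname come from the log, while writing the file directly (instead of via a temp file and sudo cp) is a simplification:

    // hosts_sketch.go: refresh the control-plane.minikube.internal entry in
    // /etc/hosts, keeping at most one copy. Requires root to write the file.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.110\tcontrol-plane.minikube.internal" // values from the log
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // drop the stale entry
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("hosts entry refreshed")
    }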
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
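The image check above runs "sudo crictl images --output json" twice: before the preload tarball is extracted it finds no kube-apiserver image, and after extraction all images are reported as preloaded. As an illustration only (not part of the captured log), a minimal Go sketch of that check, run locally rather than over SSH and assuming crictl's JSON output has the shape {"images":[{"repoTags":[...]}]}, could look like:

// Illustrative sketch only; not part of the captured log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of "crictl images --output json".
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already knows the given image tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	fmt.Println("preloaded images present:", ok)
}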
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
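Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same pattern: filter out any stale line for the hostname, append a fresh "IP<TAB>hostname" entry, and copy the result back into place with sudo. As an illustration only (not part of the captured log), a minimal in-process Go sketch of that rewrite, using a placeholder path and no temp file or sudo, could look like:

// Illustrative sketch only; not part of the captured log.
package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line for host and appends "ip<TAB>host".
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Placeholder path; the real flow edits /etc/hosts on the VM via sudo cp.
	if err := setHostsEntry("/tmp/hosts.example", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}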
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
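Each certificate above is checked with "openssl x509 -noout -in <cert> -checkend 86400", i.e. whether it will still be valid 24 hours from now. As an illustration only (not part of the captured log), a minimal Go sketch of the same test using crypto/x509, with a placeholder certificate path, could look like:

// Illustrative sketch only; not part of the captured log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the run above checks the certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}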
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
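The healthz sequence above (connection refused, then 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish, then 200 "ok") is the usual progression while the apiserver comes up. As an illustration only (not part of the captured log), a minimal Go sketch of such a polling loop, skipping TLS verification for brevity where the real client would present the cluster's certificates, could look like:

// Illustrative sketch only; not part of the captured log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers HTTP 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real client authenticates with cluster certs; skipping
		// verification keeps this sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.50.210:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}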
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
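The pod_ready waits above poll each system pod until its Ready condition turns True (coredns took about 5.5s, etcd about 3s, the rest were already Ready), while metrics-server stays not-Ready throughout this stretch of the log. As an illustration only (not part of the captured log), a minimal client-go sketch of such a wait, with the kubeconfig path and pod name as placeholders, could look like:

// Illustrative sketch only; not part of the captured log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod until it is Ready or the timeout elapses.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path and pod name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-82tmq", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}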
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
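(The loop above is minikube's apiserver wait-and-diagnose cycle: it polls for a kube-apiserver process and, when none is found, falls back to listing CRI containers and gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the same checks run by hand on the node, assuming crictl, journalctl, and the bundled v1.20.0 kubectl path shown in the log are present, would be:

  # is an apiserver process running for this profile?
  sudo pgrep -xnf kube-apiserver.*minikube.*
  # are any kube-apiserver containers known to CRI-O?
  sudo crictl ps -a --quiet --name=kube-apiserver
  # the logs minikube gathers when neither is found
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The describe-nodes step keeps failing with "connection refused" on localhost:8443 because no apiserver is up, which is why the cycle repeats.)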
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
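(The interleaved pod_ready lines come from the other clusters under test, which keep reporting their metrics-server pods as not Ready. A roughly equivalent manual check, with the pod name copied from the log above and assuming kubectl is pointed at that cluster, is:

  kubectl -n kube-system get pod metrics-server-6867b74b74-82tmq \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A healthy pod prints "True"; these logs show the condition staying "False" throughout the wait.)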
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
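The healthz probe above hits the apiserver endpoint directly and expects a 200 with body "ok". An equivalent manual check, assuming the kubeconfig on the guest points at the same endpoint (a sketch, not the harness's code):

    # Ask the apiserver for its health endpoint; prints "ok" when healthy
    kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw /healthz

    # Report the control-plane version (v1.31.0 in this run)
    kubectl --kubeconfig /var/lib/minikube/kubeconfig version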
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
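Once minikube prints the line above, the local kubeconfig context has been switched to the new cluster. A quick sanity check from the host (standard kubectl, not part of the test flow):

    # Confirm the active context and that the control plane is serving
    kubectl config current-context     # expected: no-preload-776907
    kubectl get nodes
    kubectl get pods -n kube-system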
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
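The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when that endpoint is not found (here the files simply do not exist yet, hence the status-2 exits). A condensed sketch of the same loop, with the paths and endpoint copied from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done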
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
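	(With the embed-certs-901410 profile reported ready above, a minimal sketch of how the enabled addons could be spot-checked from the host, assuming the kubeconfig context written by this run; metrics-server was still Pending in the pod list, so `kubectl top` may not return data until it becomes Ready:
		# Hypothetical follow-up checks; not executed as part of this test run.
		kubectl --context embed-certs-901410 get pods -n kube-system
		kubectl --context embed-certs-901410 get apiservice v1beta1.metrics.k8s.io
		kubectl --context embed-certs-901410 top nodes
	)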
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 01:23:31 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:31.991637747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598611991605684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f597c94-6a2a-4c84-9681-57c9339dd7f0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:31 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:31.992186223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4df7d53d-777b-48d1-bc7c-476272907fdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:31 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:31.992302703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4df7d53d-777b-48d1-bc7c-476272907fdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:31 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:31.992349803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4df7d53d-777b-48d1-bc7c-476272907fdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.022797282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c55cd438-6590-4f59-9a99-b61b9f786acd name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.022890509Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c55cd438-6590-4f59-9a99-b61b9f786acd name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.023846663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f985838b-a332-44e4-8488-44a5cc4330ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.024302939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598612024244871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f985838b-a332-44e4-8488-44a5cc4330ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.024979629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f0fb71e-f171-481d-a3da-ba4c6abeeff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.025035980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f0fb71e-f171-481d-a3da-ba4c6abeeff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.025077044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7f0fb71e-f171-481d-a3da-ba4c6abeeff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.056195361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a17a27ac-379b-4f16-9775-e894ff7038d8 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.056354789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a17a27ac-379b-4f16-9775-e894ff7038d8 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.057525552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f47da459-b051-4e1e-b45c-5f355a77d203 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.058045448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598612058021522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f47da459-b051-4e1e-b45c-5f355a77d203 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.058567358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4e79e1c-f0db-41e3-80b9-dfde79fbff80 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.058624890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4e79e1c-f0db-41e3-80b9-dfde79fbff80 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.058661789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a4e79e1c-f0db-41e3-80b9-dfde79fbff80 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.093111505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5d7fb3c-c142-49d1-90eb-eb9638dd45bf name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.093193251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5d7fb3c-c142-49d1-90eb-eb9638dd45bf name=/runtime.v1.RuntimeService/Version
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.094408291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=201dbafe-615c-4b02-bfae-95b65da57f99 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.094783775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598612094761855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=201dbafe-615c-4b02-bfae-95b65da57f99 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.095323794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1fa0856-2cb7-4268-935a-d313ef6bbe69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.095376844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1fa0856-2cb7-4268-935a-d313ef6bbe69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:23:32 old-k8s-version-179312 crio[648]: time="2024-08-14 01:23:32.095416370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e1fa0856-2cb7-4268-935a-d313ef6bbe69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug14 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051654] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037900] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug14 01:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.069039] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.745693] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.067571] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073344] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.191121] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.114642] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.237276] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +6.127376] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.063905] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.036138] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +12.708573] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 01:10] systemd-fstab-generator[5126]: Ignoring "noauto" option for root device
	[Aug14 01:12] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.068703] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:23:32 up 17 min,  0 users,  load average: 0.06, 0.05, 0.01
	Linux old-k8s-version-179312 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000be3ef0)
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a71ef0, 0x4f0ac20, 0xc000051540, 0x1, 0xc0001000c0)
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002e0380, 0xc0001000c0)
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ab37f0, 0xc000ad5fa0)
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 14 01:23:26 old-k8s-version-179312 kubelet[6586]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 14 01:23:26 old-k8s-version-179312 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 14 01:23:26 old-k8s-version-179312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 14 01:23:27 old-k8s-version-179312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 14 01:23:27 old-k8s-version-179312 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 14 01:23:27 old-k8s-version-179312 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 14 01:23:27 old-k8s-version-179312 kubelet[6595]: I0814 01:23:27.471651    6595 server.go:416] Version: v1.20.0
	Aug 14 01:23:27 old-k8s-version-179312 kubelet[6595]: I0814 01:23:27.471920    6595 server.go:837] Client rotation is on, will bootstrap in background
	Aug 14 01:23:27 old-k8s-version-179312 kubelet[6595]: I0814 01:23:27.473986    6595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 14 01:23:27 old-k8s-version-179312 kubelet[6595]: W0814 01:23:27.474877    6595 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 14 01:23:27 old-k8s-version-179312 kubelet[6595]: I0814 01:23:27.475325    6595 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (220.691401ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-179312" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.29s)
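The captured output above shows kubeadm timing out because the kubelet on old-k8s-version-179312 never answered its health check; the kubelet journal in the same capture shows it crash-looping (systemd restart counter at 114) with a cgroup v2 detection warning. A minimal sketch of re-running the diagnostics the logs themselves suggest, assuming the profile still existed and was reachable over SSH (per the audit table further down it was later deleted, so this is illustrative only):

	# Inspect kubelet health inside the minikube VM (profile name taken from the logs above)
	out/minikube-linux-amd64 -p old-k8s-version-179312 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-179312 ssh "sudo journalctl -xeu kubelet -n 100 --no-pager"
	# List any control-plane containers CRI-O started (the capture above shows an empty list)
	out/minikube-linux-amd64 -p old-k8s-version-179312 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# The suggestion printed by minikube itself: retry the start with an explicit kubelet cgroup driver
	out/minikube-linux-amd64 start -p old-k8s-version-179312 --extra-config=kubelet.cgroup-driver=systemd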

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (454.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-776907 -n no-preload-776907
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:26:56.169707648 +0000 UTC m=+6006.333204853
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-776907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-776907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.75µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-776907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
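The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the overridden image registry.k8s.io/echoserver:1.4, set via the `addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4` call recorded in the audit table below. A sketch of verifying that by hand, assuming the apiserver were reachable (here every kubectl call hit the context deadline):

	# Print the container images used by the scraper deployment the dashboard addon creates
	kubectl --context no-preload-776907 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'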
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-776907 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-776907 logs -n 25: (1.250571295s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-137211             | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-137211                  | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| start   | -p auto-612440 --memory=3072                           | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-137211 image list                           | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| delete  | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| start   | -p kindnet-612440                                      | kindnet-612440               | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:26:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:26:45.689743   70015 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:26:45.690022   70015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:45.690032   70015 out.go:304] Setting ErrFile to fd 2...
	I0814 01:26:45.690037   70015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:45.690237   70015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:26:45.690806   70015 out.go:298] Setting JSON to false
	I0814 01:26:45.691734   70015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7752,"bootTime":1723591054,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:26:45.691794   70015 start.go:139] virtualization: kvm guest
	I0814 01:26:45.693801   70015 out.go:177] * [kindnet-612440] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:26:45.695006   70015 notify.go:220] Checking for updates...
	I0814 01:26:45.695022   70015 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:26:45.696146   70015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:26:45.697215   70015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:26:45.698303   70015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:26:45.699402   70015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:26:45.700549   70015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:26:45.702059   70015 config.go:182] Loaded profile config "auto-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:45.702158   70015 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:45.702250   70015 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:45.702314   70015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:26:45.737403   70015 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 01:26:45.738452   70015 start.go:297] selected driver: kvm2
	I0814 01:26:45.738463   70015 start.go:901] validating driver "kvm2" against <nil>
	I0814 01:26:45.738473   70015 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:26:45.739124   70015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:26:45.739191   70015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:26:45.755692   70015 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:26:45.755755   70015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 01:26:45.756024   70015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:26:45.756108   70015 cni.go:84] Creating CNI manager for "kindnet"
	I0814 01:26:45.756122   70015 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 01:26:45.756192   70015 start.go:340] cluster config:
	{Name:kindnet-612440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:26:45.756342   70015 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:26:45.758786   70015 out.go:177] * Starting "kindnet-612440" primary control-plane node in "kindnet-612440" cluster
	I0814 01:26:45.873848   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:45.874327   69391 main.go:141] libmachine: (auto-612440) DBG | unable to find current IP address of domain auto-612440 in network mk-auto-612440
	I0814 01:26:45.874367   69391 main.go:141] libmachine: (auto-612440) DBG | I0814 01:26:45.874255   69414 retry.go:31] will retry after 4.411535252s: waiting for machine to come up
	I0814 01:26:45.759819   70015 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:26:45.759854   70015 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:26:45.759861   70015 cache.go:56] Caching tarball of preloaded images
	I0814 01:26:45.759935   70015 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:26:45.759945   70015 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:26:45.760043   70015 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/config.json ...
	I0814 01:26:45.760068   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/config.json: {Name:mk09c3ac9ea9a55db63f2c250f53c5986dc97426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:26:45.760227   70015 start.go:360] acquireMachinesLock for kindnet-612440: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:26:51.706550   70015 start.go:364] duration metric: took 5.946298659s to acquireMachinesLock for "kindnet-612440"
	I0814 01:26:51.706627   70015 start.go:93] Provisioning new machine with config: &{Name:kindnet-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:26:51.706736   70015 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 01:26:50.287648   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.288145   69391 main.go:141] libmachine: (auto-612440) Found IP for machine: 192.168.50.74
	I0814 01:26:50.288172   69391 main.go:141] libmachine: (auto-612440) Reserving static IP address...
	I0814 01:26:50.288185   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has current primary IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.288574   69391 main.go:141] libmachine: (auto-612440) DBG | unable to find host DHCP lease matching {name: "auto-612440", mac: "52:54:00:b0:2f:05", ip: "192.168.50.74"} in network mk-auto-612440
	I0814 01:26:50.362639   69391 main.go:141] libmachine: (auto-612440) DBG | Getting to WaitForSSH function...
	I0814 01:26:50.362676   69391 main.go:141] libmachine: (auto-612440) Reserved static IP address: 192.168.50.74
	I0814 01:26:50.362690   69391 main.go:141] libmachine: (auto-612440) Waiting for SSH to be available...
	I0814 01:26:50.365393   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.365762   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.365783   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.365904   69391 main.go:141] libmachine: (auto-612440) DBG | Using SSH client type: external
	I0814 01:26:50.365940   69391 main.go:141] libmachine: (auto-612440) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa (-rw-------)
	I0814 01:26:50.366006   69391 main.go:141] libmachine: (auto-612440) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:26:50.366034   69391 main.go:141] libmachine: (auto-612440) DBG | About to run SSH command:
	I0814 01:26:50.366082   69391 main.go:141] libmachine: (auto-612440) DBG | exit 0
	I0814 01:26:50.489930   69391 main.go:141] libmachine: (auto-612440) DBG | SSH cmd err, output: <nil>: 
	I0814 01:26:50.490194   69391 main.go:141] libmachine: (auto-612440) KVM machine creation complete!
	I0814 01:26:50.490451   69391 main.go:141] libmachine: (auto-612440) Calling .GetConfigRaw
	I0814 01:26:50.490914   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:50.491116   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:50.491303   69391 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 01:26:50.491318   69391 main.go:141] libmachine: (auto-612440) Calling .GetState
	I0814 01:26:50.492392   69391 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 01:26:50.492406   69391 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 01:26:50.492411   69391 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 01:26:50.492416   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:50.494479   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.494806   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.494829   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.494962   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:50.495113   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.495257   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.495425   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:50.495555   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:50.495757   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:50.495770   69391 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 01:26:50.597034   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:26:50.597060   69391 main.go:141] libmachine: Detecting the provisioner...
	I0814 01:26:50.597069   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:50.599685   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.599995   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.600023   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.600189   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:50.600406   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.600584   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.600760   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:50.600940   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:50.601085   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:50.601095   69391 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 01:26:50.706304   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 01:26:50.706381   69391 main.go:141] libmachine: found compatible host: buildroot
	I0814 01:26:50.706400   69391 main.go:141] libmachine: Provisioning with buildroot...
	I0814 01:26:50.706413   69391 main.go:141] libmachine: (auto-612440) Calling .GetMachineName
	I0814 01:26:50.706632   69391 buildroot.go:166] provisioning hostname "auto-612440"
	I0814 01:26:50.706654   69391 main.go:141] libmachine: (auto-612440) Calling .GetMachineName
	I0814 01:26:50.706853   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:50.709324   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.709658   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.709710   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.709860   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:50.710066   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.710225   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.710369   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:50.710532   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:50.710710   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:50.710721   69391 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-612440 && echo "auto-612440" | sudo tee /etc/hostname
	I0814 01:26:50.832930   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-612440
	
	I0814 01:26:50.832957   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:50.835293   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.835597   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.835625   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.835767   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:50.835943   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.836103   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:50.836241   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:50.836398   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:50.836592   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:50.836613   69391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-612440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-612440/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-612440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:26:50.951798   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:26:50.951826   69391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:26:50.951847   69391 buildroot.go:174] setting up certificates
	I0814 01:26:50.951859   69391 provision.go:84] configureAuth start
	I0814 01:26:50.951871   69391 main.go:141] libmachine: (auto-612440) Calling .GetMachineName
	I0814 01:26:50.952193   69391 main.go:141] libmachine: (auto-612440) Calling .GetIP
	I0814 01:26:50.954800   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.955178   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.955201   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.955356   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:50.957974   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.958434   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:50.958459   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:50.958623   69391 provision.go:143] copyHostCerts
	I0814 01:26:50.958688   69391 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:26:50.958700   69391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:26:50.958768   69391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:26:50.958898   69391 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:26:50.958909   69391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:26:50.958938   69391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:26:50.959029   69391 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:26:50.959039   69391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:26:50.959068   69391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:26:50.959154   69391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.auto-612440 san=[127.0.0.1 192.168.50.74 auto-612440 localhost minikube]
	I0814 01:26:51.060312   69391 provision.go:177] copyRemoteCerts
	I0814 01:26:51.060379   69391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:26:51.060408   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.063194   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.063545   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.063573   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.063721   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.063914   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.064069   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.064205   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:26:51.143885   69391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0814 01:26:51.166131   69391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:26:51.187875   69391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:26:51.209141   69391 provision.go:87] duration metric: took 257.269874ms to configureAuth
	I0814 01:26:51.209168   69391 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:26:51.209335   69391 config.go:182] Loaded profile config "auto-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:51.209432   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.212021   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.212290   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.212315   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.212535   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.212711   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.212828   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.212933   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.213100   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:51.213270   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:51.213284   69391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:26:51.466377   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:26:51.466403   69391 main.go:141] libmachine: Checking connection to Docker...
	I0814 01:26:51.466413   69391 main.go:141] libmachine: (auto-612440) Calling .GetURL
	I0814 01:26:51.467758   69391 main.go:141] libmachine: (auto-612440) DBG | Using libvirt version 6000000
	I0814 01:26:51.469983   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.470292   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.470320   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.470437   69391 main.go:141] libmachine: Docker is up and running!
	I0814 01:26:51.470466   69391 main.go:141] libmachine: Reticulating splines...
	I0814 01:26:51.470477   69391 client.go:171] duration metric: took 22.019814766s to LocalClient.Create
	I0814 01:26:51.470501   69391 start.go:167] duration metric: took 22.019881933s to libmachine.API.Create "auto-612440"
	I0814 01:26:51.470512   69391 start.go:293] postStartSetup for "auto-612440" (driver="kvm2")
	I0814 01:26:51.470526   69391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:26:51.470551   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:51.470804   69391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:26:51.470827   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.472668   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.472952   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.472990   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.473056   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.473200   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.473368   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.473526   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:26:51.556548   69391 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:26:51.560504   69391 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:26:51.560526   69391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:26:51.560594   69391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:26:51.560708   69391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:26:51.560824   69391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:26:51.569697   69391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:26:51.591669   69391 start.go:296] duration metric: took 121.142825ms for postStartSetup
	I0814 01:26:51.591720   69391 main.go:141] libmachine: (auto-612440) Calling .GetConfigRaw
	I0814 01:26:51.592369   69391 main.go:141] libmachine: (auto-612440) Calling .GetIP
	I0814 01:26:51.595134   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.595536   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.595570   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.595782   69391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/auto-612440/config.json ...
	I0814 01:26:51.595980   69391 start.go:128] duration metric: took 22.163946992s to createHost
	I0814 01:26:51.596008   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.598309   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.598598   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.598623   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.598759   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.598935   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.599091   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.599248   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.599406   69391 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:51.599618   69391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.74 22 <nil> <nil>}
	I0814 01:26:51.599628   69391 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:26:51.706370   69391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723598811.674775038
	
	I0814 01:26:51.706399   69391 fix.go:216] guest clock: 1723598811.674775038
	I0814 01:26:51.706407   69391 fix.go:229] Guest: 2024-08-14 01:26:51.674775038 +0000 UTC Remote: 2024-08-14 01:26:51.595994468 +0000 UTC m=+22.277284796 (delta=78.78057ms)
	I0814 01:26:51.706444   69391 fix.go:200] guest clock delta is within tolerance: 78.78057ms
	I0814 01:26:51.706469   69391 start.go:83] releasing machines lock for "auto-612440", held for 22.274559225s
	I0814 01:26:51.706505   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:51.706789   69391 main.go:141] libmachine: (auto-612440) Calling .GetIP
	I0814 01:26:51.709788   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.710219   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.710246   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.710510   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:51.711100   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:51.711318   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:26:51.711432   69391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:26:51.711472   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.711577   69391 ssh_runner.go:195] Run: cat /version.json
	I0814 01:26:51.711601   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:26:51.713991   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.714390   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.714416   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.714434   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.714680   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.714829   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.714880   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:51.714946   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:51.714956   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.715069   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:26:51.715133   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:26:51.715238   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:26:51.715372   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:26:51.715486   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:26:51.825211   69391 ssh_runner.go:195] Run: systemctl --version
	I0814 01:26:51.831460   69391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:26:51.989994   69391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:26:51.995543   69391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:26:51.995640   69391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:26:52.011394   69391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:26:52.011416   69391 start.go:495] detecting cgroup driver to use...
	I0814 01:26:52.011494   69391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:26:52.028963   69391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:26:52.042105   69391 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:26:52.042169   69391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:26:52.054611   69391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:26:52.067583   69391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:26:52.174451   69391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:26:52.336545   69391 docker.go:233] disabling docker service ...
	I0814 01:26:52.336619   69391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:26:52.350328   69391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:26:52.363519   69391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:26:52.492692   69391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:26:52.618437   69391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:26:52.634728   69391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:26:52.652976   69391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:26:52.653041   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.662574   69391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:26:52.662635   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.672310   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.681996   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.694805   69391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:26:52.707341   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.719353   69391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.734884   69391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:52.744305   69391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:26:52.752739   69391 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:26:52.752792   69391 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:26:52.765282   69391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:26:52.775586   69391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:26:52.905963   69391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:26:53.053351   69391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:26:53.053439   69391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:26:53.058394   69391 start.go:563] Will wait 60s for crictl version
	I0814 01:26:53.058451   69391 ssh_runner.go:195] Run: which crictl
	I0814 01:26:53.061931   69391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:26:53.101054   69391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:26:53.101130   69391 ssh_runner.go:195] Run: crio --version
	I0814 01:26:53.130383   69391 ssh_runner.go:195] Run: crio --version
	I0814 01:26:53.166666   69391 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:26:53.167811   69391 main.go:141] libmachine: (auto-612440) Calling .GetIP
	I0814 01:26:53.170918   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:53.171319   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:26:53.171360   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:26:53.171576   69391 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:26:53.175543   69391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:26:53.188514   69391 kubeadm.go:883] updating cluster {Name:auto-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.74 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:26:53.188663   69391 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:26:53.188714   69391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:26:53.221177   69391 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:26:53.221267   69391 ssh_runner.go:195] Run: which lz4
	I0814 01:26:53.225153   69391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:26:53.229014   69391 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:26:53.229039   69391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:26:51.708591   70015 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 01:26:51.708779   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:26:51.708837   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:26:51.725856   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0814 01:26:51.726298   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:26:51.726868   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:26:51.726894   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:26:51.727237   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:26:51.727419   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetMachineName
	I0814 01:26:51.727557   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:26:51.727751   70015 start.go:159] libmachine.API.Create for "kindnet-612440" (driver="kvm2")
	I0814 01:26:51.727777   70015 client.go:168] LocalClient.Create starting
	I0814 01:26:51.727812   70015 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0814 01:26:51.727851   70015 main.go:141] libmachine: Decoding PEM data...
	I0814 01:26:51.727874   70015 main.go:141] libmachine: Parsing certificate...
	I0814 01:26:51.727947   70015 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0814 01:26:51.727977   70015 main.go:141] libmachine: Decoding PEM data...
	I0814 01:26:51.727998   70015 main.go:141] libmachine: Parsing certificate...
	I0814 01:26:51.728022   70015 main.go:141] libmachine: Running pre-create checks...
	I0814 01:26:51.728041   70015 main.go:141] libmachine: (kindnet-612440) Calling .PreCreateCheck
	I0814 01:26:51.728341   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetConfigRaw
	I0814 01:26:51.728800   70015 main.go:141] libmachine: Creating machine...
	I0814 01:26:51.728817   70015 main.go:141] libmachine: (kindnet-612440) Calling .Create
	I0814 01:26:51.728938   70015 main.go:141] libmachine: (kindnet-612440) Creating KVM machine...
	I0814 01:26:51.730120   70015 main.go:141] libmachine: (kindnet-612440) DBG | found existing default KVM network
	I0814 01:26:51.731614   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:51.731454   70097 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:46:e4} reservation:<nil>}
	I0814 01:26:51.732919   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:51.732837   70097 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:01:bb} reservation:<nil>}
	I0814 01:26:51.734257   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:51.734187   70097 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e8b00}
	I0814 01:26:51.734287   70015 main.go:141] libmachine: (kindnet-612440) DBG | created network xml: 
	I0814 01:26:51.734297   70015 main.go:141] libmachine: (kindnet-612440) DBG | <network>
	I0814 01:26:51.734304   70015 main.go:141] libmachine: (kindnet-612440) DBG |   <name>mk-kindnet-612440</name>
	I0814 01:26:51.734316   70015 main.go:141] libmachine: (kindnet-612440) DBG |   <dns enable='no'/>
	I0814 01:26:51.734330   70015 main.go:141] libmachine: (kindnet-612440) DBG |   
	I0814 01:26:51.734352   70015 main.go:141] libmachine: (kindnet-612440) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0814 01:26:51.734364   70015 main.go:141] libmachine: (kindnet-612440) DBG |     <dhcp>
	I0814 01:26:51.734377   70015 main.go:141] libmachine: (kindnet-612440) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0814 01:26:51.734388   70015 main.go:141] libmachine: (kindnet-612440) DBG |     </dhcp>
	I0814 01:26:51.734396   70015 main.go:141] libmachine: (kindnet-612440) DBG |   </ip>
	I0814 01:26:51.734406   70015 main.go:141] libmachine: (kindnet-612440) DBG |   
	I0814 01:26:51.734417   70015 main.go:141] libmachine: (kindnet-612440) DBG | </network>
	I0814 01:26:51.734439   70015 main.go:141] libmachine: (kindnet-612440) DBG | 
	I0814 01:26:51.739408   70015 main.go:141] libmachine: (kindnet-612440) DBG | trying to create private KVM network mk-kindnet-612440 192.168.61.0/24...
	I0814 01:26:51.810546   70015 main.go:141] libmachine: (kindnet-612440) DBG | private KVM network mk-kindnet-612440 192.168.61.0/24 created
	I0814 01:26:51.810576   70015 main.go:141] libmachine: (kindnet-612440) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440 ...
	I0814 01:26:51.810589   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:51.810527   70097 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:26:51.810606   70015 main.go:141] libmachine: (kindnet-612440) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 01:26:51.810668   70015 main.go:141] libmachine: (kindnet-612440) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 01:26:52.053173   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:52.053028   70097 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa...
	I0814 01:26:52.321474   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:52.321364   70097 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/kindnet-612440.rawdisk...
	I0814 01:26:52.321503   70015 main.go:141] libmachine: (kindnet-612440) DBG | Writing magic tar header
	I0814 01:26:52.321516   70015 main.go:141] libmachine: (kindnet-612440) DBG | Writing SSH key tar header
	I0814 01:26:52.321540   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:52.321504   70097 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440 ...
	I0814 01:26:52.321673   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440
	I0814 01:26:52.321705   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0814 01:26:52.321720   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440 (perms=drwx------)
	I0814 01:26:52.321741   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0814 01:26:52.321750   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0814 01:26:52.321763   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0814 01:26:52.321773   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 01:26:52.321786   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:26:52.321795   70015 main.go:141] libmachine: (kindnet-612440) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 01:26:52.321807   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0814 01:26:52.321820   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 01:26:52.321831   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home/jenkins
	I0814 01:26:52.321841   70015 main.go:141] libmachine: (kindnet-612440) DBG | Checking permissions on dir: /home
	I0814 01:26:52.321850   70015 main.go:141] libmachine: (kindnet-612440) DBG | Skipping /home - not owner
	I0814 01:26:52.321860   70015 main.go:141] libmachine: (kindnet-612440) Creating domain...
	I0814 01:26:52.323386   70015 main.go:141] libmachine: (kindnet-612440) define libvirt domain using xml: 
	I0814 01:26:52.323408   70015 main.go:141] libmachine: (kindnet-612440) <domain type='kvm'>
	I0814 01:26:52.323419   70015 main.go:141] libmachine: (kindnet-612440)   <name>kindnet-612440</name>
	I0814 01:26:52.323427   70015 main.go:141] libmachine: (kindnet-612440)   <memory unit='MiB'>3072</memory>
	I0814 01:26:52.323442   70015 main.go:141] libmachine: (kindnet-612440)   <vcpu>2</vcpu>
	I0814 01:26:52.323449   70015 main.go:141] libmachine: (kindnet-612440)   <features>
	I0814 01:26:52.323457   70015 main.go:141] libmachine: (kindnet-612440)     <acpi/>
	I0814 01:26:52.323475   70015 main.go:141] libmachine: (kindnet-612440)     <apic/>
	I0814 01:26:52.323487   70015 main.go:141] libmachine: (kindnet-612440)     <pae/>
	I0814 01:26:52.323497   70015 main.go:141] libmachine: (kindnet-612440)     
	I0814 01:26:52.323561   70015 main.go:141] libmachine: (kindnet-612440)   </features>
	I0814 01:26:52.323601   70015 main.go:141] libmachine: (kindnet-612440)   <cpu mode='host-passthrough'>
	I0814 01:26:52.323681   70015 main.go:141] libmachine: (kindnet-612440)   
	I0814 01:26:52.323711   70015 main.go:141] libmachine: (kindnet-612440)   </cpu>
	I0814 01:26:52.323720   70015 main.go:141] libmachine: (kindnet-612440)   <os>
	I0814 01:26:52.323734   70015 main.go:141] libmachine: (kindnet-612440)     <type>hvm</type>
	I0814 01:26:52.323745   70015 main.go:141] libmachine: (kindnet-612440)     <boot dev='cdrom'/>
	I0814 01:26:52.323754   70015 main.go:141] libmachine: (kindnet-612440)     <boot dev='hd'/>
	I0814 01:26:52.323766   70015 main.go:141] libmachine: (kindnet-612440)     <bootmenu enable='no'/>
	I0814 01:26:52.323776   70015 main.go:141] libmachine: (kindnet-612440)   </os>
	I0814 01:26:52.323787   70015 main.go:141] libmachine: (kindnet-612440)   <devices>
	I0814 01:26:52.323798   70015 main.go:141] libmachine: (kindnet-612440)     <disk type='file' device='cdrom'>
	I0814 01:26:52.323824   70015 main.go:141] libmachine: (kindnet-612440)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/boot2docker.iso'/>
	I0814 01:26:52.323845   70015 main.go:141] libmachine: (kindnet-612440)       <target dev='hdc' bus='scsi'/>
	I0814 01:26:52.323865   70015 main.go:141] libmachine: (kindnet-612440)       <readonly/>
	I0814 01:26:52.323882   70015 main.go:141] libmachine: (kindnet-612440)     </disk>
	I0814 01:26:52.323893   70015 main.go:141] libmachine: (kindnet-612440)     <disk type='file' device='disk'>
	I0814 01:26:52.323907   70015 main.go:141] libmachine: (kindnet-612440)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 01:26:52.323925   70015 main.go:141] libmachine: (kindnet-612440)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/kindnet-612440.rawdisk'/>
	I0814 01:26:52.323938   70015 main.go:141] libmachine: (kindnet-612440)       <target dev='hda' bus='virtio'/>
	I0814 01:26:52.323958   70015 main.go:141] libmachine: (kindnet-612440)     </disk>
	I0814 01:26:52.323974   70015 main.go:141] libmachine: (kindnet-612440)     <interface type='network'>
	I0814 01:26:52.323987   70015 main.go:141] libmachine: (kindnet-612440)       <source network='mk-kindnet-612440'/>
	I0814 01:26:52.323999   70015 main.go:141] libmachine: (kindnet-612440)       <model type='virtio'/>
	I0814 01:26:52.324012   70015 main.go:141] libmachine: (kindnet-612440)     </interface>
	I0814 01:26:52.324023   70015 main.go:141] libmachine: (kindnet-612440)     <interface type='network'>
	I0814 01:26:52.324035   70015 main.go:141] libmachine: (kindnet-612440)       <source network='default'/>
	I0814 01:26:52.324046   70015 main.go:141] libmachine: (kindnet-612440)       <model type='virtio'/>
	I0814 01:26:52.324058   70015 main.go:141] libmachine: (kindnet-612440)     </interface>
	I0814 01:26:52.324070   70015 main.go:141] libmachine: (kindnet-612440)     <serial type='pty'>
	I0814 01:26:52.324080   70015 main.go:141] libmachine: (kindnet-612440)       <target port='0'/>
	I0814 01:26:52.324115   70015 main.go:141] libmachine: (kindnet-612440)     </serial>
	I0814 01:26:52.324134   70015 main.go:141] libmachine: (kindnet-612440)     <console type='pty'>
	I0814 01:26:52.324148   70015 main.go:141] libmachine: (kindnet-612440)       <target type='serial' port='0'/>
	I0814 01:26:52.324159   70015 main.go:141] libmachine: (kindnet-612440)     </console>
	I0814 01:26:52.324171   70015 main.go:141] libmachine: (kindnet-612440)     <rng model='virtio'>
	I0814 01:26:52.324184   70015 main.go:141] libmachine: (kindnet-612440)       <backend model='random'>/dev/random</backend>
	I0814 01:26:52.324195   70015 main.go:141] libmachine: (kindnet-612440)     </rng>
	I0814 01:26:52.324205   70015 main.go:141] libmachine: (kindnet-612440)     
	I0814 01:26:52.324217   70015 main.go:141] libmachine: (kindnet-612440)     
	I0814 01:26:52.324245   70015 main.go:141] libmachine: (kindnet-612440)   </devices>
	I0814 01:26:52.324258   70015 main.go:141] libmachine: (kindnet-612440) </domain>
	I0814 01:26:52.324268   70015 main.go:141] libmachine: (kindnet-612440) 
	I0814 01:26:52.328242   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:32:05:5c in network default
	I0814 01:26:52.328846   70015 main.go:141] libmachine: (kindnet-612440) Ensuring networks are active...
	I0814 01:26:52.328865   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:52.329704   70015 main.go:141] libmachine: (kindnet-612440) Ensuring network default is active
	I0814 01:26:52.330065   70015 main.go:141] libmachine: (kindnet-612440) Ensuring network mk-kindnet-612440 is active
	I0814 01:26:52.330679   70015 main.go:141] libmachine: (kindnet-612440) Getting domain xml...
	I0814 01:26:52.331432   70015 main.go:141] libmachine: (kindnet-612440) Creating domain...
	I0814 01:26:53.729978   70015 main.go:141] libmachine: (kindnet-612440) Waiting to get IP...
	I0814 01:26:53.731109   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:53.731655   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:53.731685   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:53.731634   70097 retry.go:31] will retry after 263.092697ms: waiting for machine to come up
	I0814 01:26:53.996192   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:53.996781   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:53.996857   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:53.996769   70097 retry.go:31] will retry after 273.190954ms: waiting for machine to come up
	I0814 01:26:54.271229   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:54.273176   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:54.273214   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:54.273113   70097 retry.go:31] will retry after 401.213002ms: waiting for machine to come up
	I0814 01:26:54.675556   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:54.676065   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:54.676094   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:54.676025   70097 retry.go:31] will retry after 565.0233ms: waiting for machine to come up
	I0814 01:26:55.243106   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:55.243661   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:55.243691   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:55.243639   70097 retry.go:31] will retry after 632.390616ms: waiting for machine to come up
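
The libmachine log above shows minikube's KVM driver defining the kindnet-612440 domain from the XML it just built, activating the `default` and `mk-kindnet-612440` networks, and then polling with growing back-offs until the VM picks up a DHCP lease. As a rough illustration only (not the driver's actual code), the sketch below performs the same define → start → wait-for-IP sequence with the libvirt Go bindings; the connection URI, network name and MAC address are taken from the log, everything else is an assumption.

```go
// A minimal sketch, assuming the libvirt Go bindings (libvirt.org/go/libvirt),
// of the define -> start -> wait-for-DHCP-lease flow shown in the log above.
// This is NOT minikube's actual KVM driver, just an illustration.
package main

import (
	"fmt"
	"log"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "define libvirt domain using xml": the full <domain> XML printed above.
	domainXML := "<domain type='kvm'>...</domain>" // placeholder for that XML

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	if err := dom.Create(); err != nil { // "Creating domain..." (boots the VM)
		log.Fatal(err)
	}

	// "Waiting to get IP...": poll the private network's DHCP leases until the
	// domain's MAC shows up, sleeping between attempts like retry.go does.
	network, err := conn.LookupNetworkByName("mk-kindnet-612440")
	if err != nil {
		log.Fatal(err)
	}
	for {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			log.Fatal(err)
		}
		for _, lease := range leases {
			if lease.Mac == "52:54:00:2e:21:e4" { // MAC reported in the log
				fmt.Println("machine is up at", lease.IPaddr)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // the real driver uses growing, jittered delays
	}
}
```
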
	
	
	==> CRI-O <==
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.874959821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598816874938743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f5db639-503c-464f-ad78-bbbfe922b596 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.875586197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=216bf0b7-417f-43f9-94f9-1accd1e87714 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.875680845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=216bf0b7-417f-43f9-94f9-1accd1e87714 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.876222667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=216bf0b7-417f-43f9-94f9-1accd1e87714 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.916995107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8806c5a3-eb54-4ab7-b572-260ded501e50 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.917162487Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8806c5a3-eb54-4ab7-b572-260ded501e50 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.918343306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=152da06c-1a02-423c-ac5b-e1ad02d78a2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.918882563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598816918851809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=152da06c-1a02-423c-ac5b-e1ad02d78a2a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.919523806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a583d92-e321-4f9b-9aa3-b0314f2dada7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.919612944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a583d92-e321-4f9b-9aa3-b0314f2dada7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.919881908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a583d92-e321-4f9b-9aa3-b0314f2dada7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.957929866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c5c60aa-df85-4e41-b3df-a7f2bc77a225 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.958009009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c5c60aa-df85-4e41-b3df-a7f2bc77a225 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.959395500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8034fcb9-1c62-4c66-9ada-9e1dc0fb268a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.959923639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598816959900023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8034fcb9-1c62-4c66-9ada-9e1dc0fb268a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.960425155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad18629d-b997-4fa6-8b44-a3650ddf559d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.960495936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad18629d-b997-4fa6-8b44-a3650ddf559d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:56 no-preload-776907 crio[731]: time="2024-08-14 01:26:56.960799546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad18629d-b997-4fa6-8b44-a3650ddf559d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.001620272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9ef3268-40f4-47f2-b33e-64a92dfdb069 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.001734200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9ef3268-40f4-47f2-b33e-64a92dfdb069 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.003125644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f83782dc-f4b9-4f8c-aba7-1d56d11887ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.003574682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598817003542206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f83782dc-f4b9-4f8c-aba7-1d56d11887ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.004521443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90d07a09-e1fa-4ebc-8ff1-4df51237f5bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.004622811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90d07a09-e1fa-4ebc-8ff1-4df51237f5bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:57 no-preload-776907 crio[731]: time="2024-08-14 01:26:57.004918944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597584961991657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5778f1274c91f6882fd1efbc2d7c2f484c2f1daf8c772baf6f7d6398b11d2bcd,PodSandboxId:d9f891d25e8e1aaf25d0e48e092294c60510a060f2c32f09c772127917dfbc71,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723597564688648231,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c514e832-2998-4439-bb97-0d6d4eb4e499,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc,PodSandboxId:83a1a082fd506659affe2870d9ff9a0d6fdf28c0c211596a2c186635a8880fc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597561866844465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-dz9zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e29ce3-7f67-4b96-8030-c980773b5772,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12,PodSandboxId:6b12b85e75b67325c97708feca61417980a8504ed000e11ffe7929e7666afa80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723597554114575314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pgm9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efad60b0-c62e-4c47-97
4b-98fdca9d3496,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768,PodSandboxId:c94f7f9e7de031c457a749f2cefd26e7eaecac814369bea2a126dc540ae95e8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723597554103696166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0ba9510-e0a5-4558-98e3-a9510920f93
a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2,PodSandboxId:8d49aac6a7eb624a202a61b82b0a35a7ce0277e4c21afb45f0db4970a93af7ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597550385420710,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f30aa569f7332a3771c25ad0568b0e7d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388,PodSandboxId:cc557c0f92cc4b2da21354ba61b5934a1951b181ab44212a8a2bde2717195d7d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597550340466652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1727822331a98d206a1c6455e6be9d1a,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091,PodSandboxId:ee7ecc8991ff707504a4b1e27f2e6763b86e88139265a015c5dc25179958f68d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597550361489327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da5bdcd48f884b5b86c729f49cf3dd71,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e,PodSandboxId:ff775dc6fd48640328c7d30640188a25141e6e31471f94649135b200cc891a46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597550293425759,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-776907,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3d30aa4c418230085009c5296d2a369,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90d07a09-e1fa-4ebc-8ff1-4df51237f5bf name=/runtime.v1.RuntimeService/ListContainers
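
Each CRI-O entry above is the server side of a CRI gRPC call (Version, ImageFsInfo, and ListContainers with an empty filter) made while this report was being collected. The sketch below is a minimal, assumed client that issues the same three calls against the crio.sock endpoint using the upstream k8s.io/cri-api definitions; it is illustrative, not the collector minikube actually uses.

```go
// Hedged sketch of a CRI client issuing the RuntimeService/ImageService calls
// seen in the CRI-O debug log above. Assumes the upstream CRI protobufs from
// k8s.io/cri-api and the default CRI-O socket path.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same socket CRI-O advertises via kubeadm.alpha.kubernetes.io/cri-socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1"

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println(f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is
	// why the server logs "No filters were applied, returning full container list".
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```

The `crictl` commands used for log collection drive these same endpoints; the container status table that follows is the human-readable rendering of the ListContainers response shown above.
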
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4d7da10edbe3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   c94f7f9e7de03       storage-provisioner
	5778f1274c91f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   d9f891d25e8e1       busybox
	7d3cb1d648607       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   83a1a082fd506       coredns-6f6b679f8f-dz9zk
	0ec88a5a7a9d5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Running             kube-proxy                1                   6b12b85e75b67       kube-proxy-pgm9t
	bacb411cbea20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   c94f7f9e7de03       storage-provisioner
	89953f1dc813e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Running             kube-scheduler            1                   8d49aac6a7eb6       kube-scheduler-no-preload-776907
	3ef9bf666bbbc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      21 minutes ago      Running             kube-controller-manager   1                   ee7ecc8991ff7       kube-controller-manager-no-preload-776907
	1632d4b88f7f0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   cc557c0f92cc4       etcd-no-preload-776907
	ddba3ebb8413d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      21 minutes ago      Running             kube-apiserver            1                   ff775dc6fd486       kube-apiserver-no-preload-776907
	
	
	==> coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58321 - 41401 "HINFO IN 3415938331824396986.8339278305176018987. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008149157s
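
The single query in the coredns log above looks like CoreDNS's loop-detection self-check: a HINFO query for a random label sent back to itself, where NXDOMAIN is the healthy result. The sketch below reproduces that kind of probe with github.com/miekg/dns (the library CoreDNS is built on); the resolver address and the random label are illustrative assumptions, not values from this cluster.

```go
// Hedged sketch: send a random-label HINFO probe like the one CoreDNS logs at
// startup ("[INFO] 127.0.0.1:... HINFO ... NXDOMAIN ..."). Uses
// github.com/miekg/dns; the target address is an assumption.
package main

import (
	"fmt"
	"log"
	"math/rand"

	"github.com/miekg/dns"
)

func main() {
	// A random, unresolvable label; NXDOMAIN is the expected (healthy) answer.
	name := dns.Fqdn(fmt.Sprintf("%d.%d", rand.Int63(), rand.Int63()))

	m := new(dns.Msg)
	m.SetQuestion(name, dns.TypeHINFO)

	c := new(dns.Client)
	c.Net = "udp"

	// Point this at the CoreDNS instance you want to probe.
	resp, _, err := c.Exchange(m, "127.0.0.1:53")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(dns.RcodeToString[resp.Rcode]) // expect "NXDOMAIN"
}
```
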
	
	
	==> describe nodes <==
	Name:               no-preload-776907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-776907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=no-preload-776907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T00_57_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 00:57:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-776907
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:26:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:26:48 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:26:48 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:26:48 +0000   Wed, 14 Aug 2024 00:57:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:26:48 +0000   Wed, 14 Aug 2024 01:06:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.94
	  Hostname:    no-preload-776907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8aa38961189044b487fbdbba224d46d9
	  System UUID:                8aa38961-1890-44b4-87fb-dbba224d46d9
	  Boot ID:                    c38d77c1-1566-4add-8535-79ad41888d31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-dz9zk                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-no-preload-776907                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-776907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-776907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-pgm9t                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-776907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-gb2dt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m (x2 over 29m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x2 over 29m)  kubelet          Node no-preload-776907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x2 over 29m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-776907 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node no-preload-776907 event: Registered Node no-preload-776907 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-776907 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-776907 event: Registered Node no-preload-776907 in Controller
	
	
	==> dmesg <==
	[Aug14 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050312] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.659382] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.819760] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544186] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.265452] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.058814] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056161] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.175041] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.137703] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.272850] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[ +14.726382] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.053351] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.754942] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +3.835726] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.204496] systemd-fstab-generator[2062]: Ignoring "noauto" option for root device
	[  +3.258017] kauditd_printk_skb: 61 callbacks suppressed
	[Aug14 01:06] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] <==
	{"level":"info","ts":"2024-08-14T01:06:00.713876Z","caller":"traceutil/trace.go:171","msg":"trace[1321742999] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"599.854489ms","start":"2024-08-14T01:06:00.114007Z","end":"2024-08-14T01:06:00.713861Z","steps":["trace[1321742999] 'process raft request'  (duration: 281.002886ms)","trace[1321742999] 'compare'  (duration: 317.861878ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:06:00.714365Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.113977Z","time spent":"600.353758ms","remote":"127.0.0.1:44046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:519 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-14T01:06:00.719504Z","caller":"traceutil/trace.go:171","msg":"trace[1153514793] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"316.504152ms","start":"2024-08-14T01:06:00.402953Z","end":"2024-08-14T01:06:00.719457Z","steps":["trace[1153514793] 'process raft request'  (duration: 316.395846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:06:00.719587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:06:00.402937Z","time spent":"316.614323ms","remote":"127.0.0.1:43632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-6867b74b74-gb2dt.17eb72d96368e34b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-6867b74b74-gb2dt.17eb72d96368e34b\" value_size:707 lease:2270255453167696814 >> failure:<>"}
	{"level":"warn","ts":"2024-08-14T01:06:20.558393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.380863ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493627490022472974 > lease_revoke:<id:1f81914e6b743c81>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T01:06:20.558660Z","caller":"traceutil/trace.go:171","msg":"trace[1788565001] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:663; }","duration":"288.370866ms","start":"2024-08-14T01:06:20.270254Z","end":"2024-08-14T01:06:20.558625Z","steps":["trace[1788565001] 'read index received'  (duration: 45.634344ms)","trace[1788565001] 'applied index is now lower than readState.Index'  (duration: 242.735561ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:06:20.558855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.571674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-gb2dt\" ","response":"range_response_count:1 size:4339"}
	{"level":"info","ts":"2024-08-14T01:06:20.558955Z","caller":"traceutil/trace.go:171","msg":"trace[1568626227] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-gb2dt; range_end:; response_count:1; response_revision:622; }","duration":"288.690307ms","start":"2024-08-14T01:06:20.270249Z","end":"2024-08-14T01:06:20.558939Z","steps":["trace[1568626227] 'agreement among raft nodes before linearized reading'  (duration: 288.479034ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:15:52.053318Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":859}
	{"level":"info","ts":"2024-08-14T01:15:52.068734Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":859,"took":"14.681704ms","hash":3522602525,"current-db-size-bytes":2699264,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2699264,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-14T01:15:52.068847Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3522602525,"revision":859,"compact-revision":-1}
	{"level":"info","ts":"2024-08-14T01:20:52.060745Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1101}
	{"level":"info","ts":"2024-08-14T01:20:52.065368Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1101,"took":"4.196137ms","hash":2020050129,"current-db-size-bytes":2699264,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T01:20:52.065422Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2020050129,"revision":1101,"compact-revision":859}
	{"level":"info","ts":"2024-08-14T01:25:39.965542Z","caller":"traceutil/trace.go:171","msg":"trace[921939002] transaction","detail":"{read_only:false; response_revision:1578; number_of_response:1; }","duration":"108.214383ms","start":"2024-08-14T01:25:39.857277Z","end":"2024-08-14T01:25:39.965491Z","steps":["trace[921939002] 'process raft request'  (duration: 108.052102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:25:40.380947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.061565ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11493627490022480651 > lease_revoke:<id:1f81914e6b745aac>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T01:25:40.381241Z","caller":"traceutil/trace.go:171","msg":"trace[2096684342] linearizableReadLoop","detail":"{readStateIndex:1858; appliedIndex:1857; }","duration":"262.926941ms","start":"2024-08-14T01:25:40.118268Z","end":"2024-08-14T01:25:40.381195Z","steps":["trace[2096684342] 'read index received'  (duration: 60.592404ms)","trace[2096684342] 'applied index is now lower than readState.Index'  (duration: 202.332802ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:25:40.381357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.064825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:25:40.381411Z","caller":"traceutil/trace.go:171","msg":"trace[621487222] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1578; }","duration":"263.127953ms","start":"2024-08-14T01:25:40.118259Z","end":"2024-08-14T01:25:40.381387Z","steps":["trace[621487222] 'agreement among raft nodes before linearized reading'  (duration: 263.038397ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:25:52.067873Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1344}
	{"level":"info","ts":"2024-08-14T01:25:52.072013Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1344,"took":"3.806723ms","hash":3528713098,"current-db-size-bytes":2699264,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1572864,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T01:25:52.072096Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3528713098,"revision":1344,"compact-revision":1101}
	{"level":"info","ts":"2024-08-14T01:26:32.314746Z","caller":"traceutil/trace.go:171","msg":"trace[746678710] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"105.832199ms","start":"2024-08-14T01:26:32.208870Z","end":"2024-08-14T01:26:32.314702Z","steps":["trace[746678710] 'process raft request'  (duration: 105.651893ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:26:32.727541Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.425735ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:26:32.727710Z","caller":"traceutil/trace.go:171","msg":"trace[738396773] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1621; }","duration":"256.668216ms","start":"2024-08-14T01:26:32.471026Z","end":"2024-08-14T01:26:32.727694Z","steps":["trace[738396773] 'range keys from in-memory index tree'  (duration: 256.401345ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:26:57 up 21 min,  0 users,  load average: 0.80, 0.32, 0.17
	Linux no-preload-776907 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] <==
	I0814 01:23:54.457470       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:23:54.457542       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:25:53.455504       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:53.455892       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:25:54.457818       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:54.457949       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:25:54.457831       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:54.458112       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:25:54.459175       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:25:54.459232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:26:54.460310       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:54.460421       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:26:54.460645       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:54.460736       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:26:54.461641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:26:54.462755       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] <==
	I0814 01:21:41.971343       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-776907"
	E0814 01:21:57.205589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:21:57.737638       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:21:59.732563       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="247.818µs"
	I0814 01:22:14.728287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="153.739µs"
	E0814 01:22:27.211394       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:22:27.744475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:22:57.217915       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:22:57.752103       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:23:27.224389       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:27.759331       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:23:57.229726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:57.766661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:27.236441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:27.774122       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:57.244256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:57.781965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:27.251595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:27.791468       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:57.258847       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:57.801026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:26:27.267312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:26:27.812658       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:26:48.818997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-776907"
	E0814 01:26:57.275712       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:05:54.606252       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:05:54.662856       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.94"]
	E0814 01:05:54.662980       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:05:54.719407       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:05:54.719534       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:05:54.719630       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:05:54.740931       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:05:54.741921       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:05:54.741961       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:05:54.749780       1 config.go:197] "Starting service config controller"
	I0814 01:05:54.750936       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:05:54.751586       1 config.go:326] "Starting node config controller"
	I0814 01:05:54.751614       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:05:54.754101       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:05:54.754145       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:05:54.852257       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:05:54.852269       1 shared_informer.go:320] Caches are synced for node config
	I0814 01:05:54.855100       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] <==
	I0814 01:05:51.574748       1 serving.go:386] Generated self-signed cert in-memory
	I0814 01:05:53.506579       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 01:05:53.509123       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:05:53.524184       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0814 01:05:53.524302       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0814 01:05:53.524387       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 01:05:53.524433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 01:05:53.524594       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 01:05:53.524525       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 01:05:53.524752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0814 01:05:53.524777       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0814 01:05:53.624944       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0814 01:05:53.625176       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0814 01:05:53.625349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:25:59 no-preload-776907 kubelet[1436]: E0814 01:25:59.959408    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598759959202530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:59 no-preload-776907 kubelet[1436]: E0814 01:25:59.959451    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598759959202530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:00 no-preload-776907 kubelet[1436]: E0814 01:26:00.714793    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:26:09 no-preload-776907 kubelet[1436]: E0814 01:26:09.961375    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598769960969032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:09 no-preload-776907 kubelet[1436]: E0814 01:26:09.961688    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598769960969032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:15 no-preload-776907 kubelet[1436]: E0814 01:26:15.716303    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:26:19 no-preload-776907 kubelet[1436]: E0814 01:26:19.962828    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598779962582783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:19 no-preload-776907 kubelet[1436]: E0814 01:26:19.963145    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598779962582783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:29 no-preload-776907 kubelet[1436]: E0814 01:26:29.715597    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:26:29 no-preload-776907 kubelet[1436]: E0814 01:26:29.964860    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598789964523206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:29 no-preload-776907 kubelet[1436]: E0814 01:26:29.964961    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598789964523206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:39 no-preload-776907 kubelet[1436]: E0814 01:26:39.966563    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598799966139195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:39 no-preload-776907 kubelet[1436]: E0814 01:26:39.966962    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598799966139195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:44 no-preload-776907 kubelet[1436]: E0814 01:26:44.714749    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]: E0814 01:26:49.729247    1436 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]: E0814 01:26:49.969497    1436 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598809969124674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:49 no-preload-776907 kubelet[1436]: E0814 01:26:49.969521    1436 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598809969124674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:56 no-preload-776907 kubelet[1436]: E0814 01:26:56.728273    1436 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 14 01:26:56 no-preload-776907 kubelet[1436]: E0814 01:26:56.728360    1436 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 14 01:26:56 no-preload-776907 kubelet[1436]: E0814 01:26:56.728648    1436 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f65v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-gb2dt_kube-system(c950c58e-c5c3-4535-b10f-f4379ff03409): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 14 01:26:56 no-preload-776907 kubelet[1436]: E0814 01:26:56.730207    1436 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-gb2dt" podUID="c950c58e-c5c3-4535-b10f-f4379ff03409"
	
	
	==> storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] <==
	I0814 01:05:54.251864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 01:06:24.255920       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] <==
	I0814 01:06:25.043632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:06:25.053138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:06:25.053251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:06:42.455501       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:06:42.457132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390!
	I0814 01:06:42.459143       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b183bb2f-bdc3-4b88-9bc3-98a8a2a13ac5", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390 became leader
	I0814 01:06:42.557857       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-776907_a5dfc76c-d470-49ff-ba3b-6cf96c638390!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-776907 -n no-preload-776907
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-776907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gb2dt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt: exit status 1 (83.061014ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gb2dt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-776907 describe pod metrics-server-6867b74b74-gb2dt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (454.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (478.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:28:01.454571828 +0000 UTC m=+6071.618069031
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-585256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.24µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-585256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-585256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-585256 logs -n 25: (1.659703506s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-137211             | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-137211                  | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| start   | -p auto-612440 --memory=3072                           | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:27 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-137211 image list                           | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| delete  | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| start   | -p kindnet-612440                                      | kindnet-612440               | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:27 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	| start   | -p calico-612440 --memory=3072                         | calico-612440                | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-612440 pgrep -a                                | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:27 UTC | 14 Aug 24 01:27 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-612440 pgrep -a                             | kindnet-612440               | jenkins | v1.33.1 | 14 Aug 24 01:27 UTC | 14 Aug 24 01:27 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-612440 sudo cat                                | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:28 UTC | 14 Aug 24 01:28 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-612440 sudo cat                                | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:28 UTC | 14 Aug 24 01:28 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	| ssh     | -p auto-612440 sudo cat                                | auto-612440                  | jenkins | v1.33.1 | 14 Aug 24 01:28 UTC |                     |
	|         | /etc/resolv.conf                                       |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:26:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:26:59.942685   70428 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:26:59.943223   70428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:59.943275   70428 out.go:304] Setting ErrFile to fd 2...
	I0814 01:26:59.943294   70428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:59.943792   70428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:26:59.945048   70428 out.go:298] Setting JSON to false
	I0814 01:26:59.946011   70428 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7766,"bootTime":1723591054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:26:59.946105   70428 start.go:139] virtualization: kvm guest
	I0814 01:26:59.948166   70428 out.go:177] * [calico-612440] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:26:59.950362   70428 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:26:59.950399   70428 notify.go:220] Checking for updates...
	I0814 01:26:59.952956   70428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:26:59.954223   70428 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:26:59.955277   70428 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:26:59.956486   70428 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:26:59.957807   70428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:26:59.959673   70428 config.go:182] Loaded profile config "auto-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:59.959826   70428 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:59.959955   70428 config.go:182] Loaded profile config "kindnet-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:59.960063   70428 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:26:59.998860   70428 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 01:27:00.000033   70428 start.go:297] selected driver: kvm2
	I0814 01:27:00.000043   70428 start.go:901] validating driver "kvm2" against <nil>
	I0814 01:27:00.000054   70428 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:27:00.000768   70428 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:27:00.000835   70428 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:27:00.016630   70428 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:27:00.016685   70428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 01:27:00.016998   70428 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:27:00.017082   70428 cni.go:84] Creating CNI manager for "calico"
	I0814 01:27:00.017099   70428 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0814 01:27:00.017165   70428 start.go:340] cluster config:
	{Name:calico-612440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:27:00.017334   70428 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:27:00.019229   70428 out.go:177] * Starting "calico-612440" primary control-plane node in "calico-612440" cluster
	I0814 01:26:55.877468   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:55.878064   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:55.878105   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:55.877959   70097 retry.go:31] will retry after 596.869933ms: waiting for machine to come up
	I0814 01:26:56.478616   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:56.480480   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:56.480516   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:56.480383   70097 retry.go:31] will retry after 810.827062ms: waiting for machine to come up
	I0814 01:26:57.292444   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:57.293010   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:57.293035   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:57.292960   70097 retry.go:31] will retry after 1.257904013s: waiting for machine to come up
	I0814 01:26:58.552290   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:58.552810   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:58.552837   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:58.552755   70097 retry.go:31] will retry after 1.41747331s: waiting for machine to come up
	I0814 01:26:59.971880   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:26:59.972458   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:26:59.972489   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:26:59.972434   70097 retry.go:31] will retry after 1.643782348s: waiting for machine to come up
	I0814 01:27:00.020392   70428 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:27:00.020440   70428 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:27:00.020454   70428 cache.go:56] Caching tarball of preloaded images
	I0814 01:27:00.020534   70428 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:27:00.020555   70428 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:27:00.020671   70428 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/config.json ...
	I0814 01:27:00.020693   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/config.json: {Name:mkd3f2a4b749fca6047a65c84d3577e4431ebaf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:00.020847   70428 start.go:360] acquireMachinesLock for calico-612440: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:27:01.617524   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:01.618126   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:27:01.618177   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:27:01.618099   70097 retry.go:31] will retry after 2.444944945s: waiting for machine to come up
	I0814 01:27:04.066253   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:04.066831   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:27:04.066865   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:27:04.066775   70097 retry.go:31] will retry after 3.384958564s: waiting for machine to come up
	I0814 01:27:09.059053   69391 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:27:09.059131   69391 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:27:09.059232   69391 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:27:09.059352   69391 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:27:09.059466   69391 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:27:09.059530   69391 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:27:09.061136   69391 out.go:204]   - Generating certificates and keys ...
	I0814 01:27:09.061225   69391 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:27:09.061297   69391 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:27:09.061371   69391 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 01:27:09.061442   69391 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 01:27:09.061512   69391 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 01:27:09.061583   69391 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 01:27:09.061664   69391 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 01:27:09.061853   69391 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-612440 localhost] and IPs [192.168.50.74 127.0.0.1 ::1]
	I0814 01:27:09.061917   69391 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 01:27:09.062083   69391 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-612440 localhost] and IPs [192.168.50.74 127.0.0.1 ::1]
	I0814 01:27:09.062185   69391 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 01:27:09.062270   69391 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 01:27:09.062335   69391 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 01:27:09.062394   69391 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:27:09.062455   69391 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:27:09.062526   69391 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:27:09.062583   69391 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:27:09.062672   69391 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:27:09.062742   69391 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:27:09.062828   69391 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:27:09.062894   69391 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:27:09.064259   69391 out.go:204]   - Booting up control plane ...
	I0814 01:27:09.064371   69391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:27:09.064473   69391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:27:09.064551   69391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:27:09.064715   69391 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:27:09.064847   69391 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:27:09.064889   69391 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:27:09.064997   69391 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:27:09.065140   69391 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:27:09.065251   69391 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.792051ms
	I0814 01:27:09.065327   69391 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:27:09.065386   69391 kubeadm.go:310] [api-check] The API server is healthy after 5.501293723s
	I0814 01:27:09.065473   69391 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:27:09.065621   69391 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:27:09.065706   69391 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:27:09.065904   69391 kubeadm.go:310] [mark-control-plane] Marking the node auto-612440 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:27:09.065982   69391 kubeadm.go:310] [bootstrap-token] Using token: zujlcb.u40862hntf9a56bs
	I0814 01:27:09.067396   69391 out.go:204]   - Configuring RBAC rules ...
	I0814 01:27:09.067513   69391 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:27:09.067619   69391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:27:09.067792   69391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:27:09.067929   69391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:27:09.068041   69391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:27:09.068124   69391 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:27:09.068222   69391 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:27:09.068259   69391 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:27:09.068298   69391 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:27:09.068304   69391 kubeadm.go:310] 
	I0814 01:27:09.068352   69391 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:27:09.068362   69391 kubeadm.go:310] 
	I0814 01:27:09.068447   69391 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:27:09.068454   69391 kubeadm.go:310] 
	I0814 01:27:09.068474   69391 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:27:09.068549   69391 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:27:09.068625   69391 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:27:09.068637   69391 kubeadm.go:310] 
	I0814 01:27:09.068717   69391 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:27:09.068728   69391 kubeadm.go:310] 
	I0814 01:27:09.068806   69391 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:27:09.068815   69391 kubeadm.go:310] 
	I0814 01:27:09.068881   69391 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:27:09.068988   69391 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:27:09.069081   69391 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:27:09.069104   69391 kubeadm.go:310] 
	I0814 01:27:09.069224   69391 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:27:09.069334   69391 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:27:09.069343   69391 kubeadm.go:310] 
	I0814 01:27:09.069463   69391 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zujlcb.u40862hntf9a56bs \
	I0814 01:27:09.069606   69391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:27:09.069639   69391 kubeadm.go:310] 	--control-plane 
	I0814 01:27:09.069655   69391 kubeadm.go:310] 
	I0814 01:27:09.069778   69391 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:27:09.069787   69391 kubeadm.go:310] 
	I0814 01:27:09.069869   69391 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zujlcb.u40862hntf9a56bs \
	I0814 01:27:09.069999   69391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:27:09.070022   69391 cni.go:84] Creating CNI manager for ""
	I0814 01:27:09.070032   69391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:27:09.071426   69391 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:27:09.072531   69391 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:27:09.084991   69391 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:27:09.112193   69391 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:27:09.112279   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:09.112279   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-612440 minikube.k8s.io/updated_at=2024_08_14T01_27_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=auto-612440 minikube.k8s.io/primary=true
	I0814 01:27:09.153610   69391 ops.go:34] apiserver oom_adj: -16
	I0814 01:27:09.261438   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:07.453176   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:07.453578   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:27:07.453606   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:27:07.453549   70097 retry.go:31] will retry after 3.041588933s: waiting for machine to come up
	I0814 01:27:10.496745   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:10.497237   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find current IP address of domain kindnet-612440 in network mk-kindnet-612440
	I0814 01:27:10.497265   70015 main.go:141] libmachine: (kindnet-612440) DBG | I0814 01:27:10.497201   70097 retry.go:31] will retry after 4.432557148s: waiting for machine to come up
	I0814 01:27:09.762135   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:10.262528   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:10.762447   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:11.261483   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:11.761736   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:12.262535   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:12.762418   69391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:12.849604   69391 kubeadm.go:1113] duration metric: took 3.737412327s to wait for elevateKubeSystemPrivileges
	I0814 01:27:12.849639   69391 kubeadm.go:394] duration metric: took 14.500925914s to StartCluster
	I0814 01:27:12.849656   69391 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:12.849735   69391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:27:12.850733   69391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:12.850960   69391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 01:27:12.850976   69391 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.74 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:27:12.851051   69391 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:27:12.851142   69391 addons.go:69] Setting storage-provisioner=true in profile "auto-612440"
	I0814 01:27:12.851189   69391 addons.go:234] Setting addon storage-provisioner=true in "auto-612440"
	I0814 01:27:12.851192   69391 addons.go:69] Setting default-storageclass=true in profile "auto-612440"
	I0814 01:27:12.851225   69391 host.go:66] Checking if "auto-612440" exists ...
	I0814 01:27:12.851241   69391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-612440"
	I0814 01:27:12.851244   69391 config.go:182] Loaded profile config "auto-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:27:12.851626   69391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:12.851669   69391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:12.851768   69391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:12.851799   69391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:12.852713   69391 out.go:177] * Verifying Kubernetes components...
	I0814 01:27:12.854057   69391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:12.866442   69391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
	I0814 01:27:12.866454   69391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35299
	I0814 01:27:12.866862   69391 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:12.866925   69391 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:12.867345   69391 main.go:141] libmachine: Using API Version  1
	I0814 01:27:12.867362   69391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:12.867479   69391 main.go:141] libmachine: Using API Version  1
	I0814 01:27:12.867503   69391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:12.867689   69391 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:12.867865   69391 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:12.868231   69391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:12.868271   69391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:12.868594   69391 main.go:141] libmachine: (auto-612440) Calling .GetState
	I0814 01:27:12.872075   69391 addons.go:234] Setting addon default-storageclass=true in "auto-612440"
	I0814 01:27:12.872115   69391 host.go:66] Checking if "auto-612440" exists ...
	I0814 01:27:12.872477   69391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:12.872515   69391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:12.884083   69391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35021
	I0814 01:27:12.884555   69391 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:12.885065   69391 main.go:141] libmachine: Using API Version  1
	I0814 01:27:12.885092   69391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:12.885453   69391 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:12.885648   69391 main.go:141] libmachine: (auto-612440) Calling .GetState
	I0814 01:27:12.887147   69391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0814 01:27:12.887500   69391 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:12.887694   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:27:12.887923   69391 main.go:141] libmachine: Using API Version  1
	I0814 01:27:12.887941   69391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:12.888446   69391 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:12.889119   69391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:12.889167   69391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:12.889798   69391 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:27:12.890999   69391 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:27:12.891012   69391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:27:12.891026   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:27:12.894531   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:27:12.894973   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:27:12.895031   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:27:12.895289   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:27:12.895501   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:27:12.895674   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:27:12.895809   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:27:12.904400   69391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0814 01:27:12.904754   69391 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:12.905178   69391 main.go:141] libmachine: Using API Version  1
	I0814 01:27:12.905196   69391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:12.905494   69391 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:12.905675   69391 main.go:141] libmachine: (auto-612440) Calling .GetState
	I0814 01:27:12.907278   69391 main.go:141] libmachine: (auto-612440) Calling .DriverName
	I0814 01:27:12.907489   69391 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:27:12.907506   69391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:27:12.907524   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHHostname
	I0814 01:27:12.910115   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:27:12.910506   69391 main.go:141] libmachine: (auto-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2f:05", ip: ""} in network mk-auto-612440: {Iface:virbr2 ExpiryTime:2024-08-14 02:26:43 +0000 UTC Type:0 Mac:52:54:00:b0:2f:05 Iaid: IPaddr:192.168.50.74 Prefix:24 Hostname:auto-612440 Clientid:01:52:54:00:b0:2f:05}
	I0814 01:27:12.910529   69391 main.go:141] libmachine: (auto-612440) DBG | domain auto-612440 has defined IP address 192.168.50.74 and MAC address 52:54:00:b0:2f:05 in network mk-auto-612440
	I0814 01:27:12.910886   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHPort
	I0814 01:27:12.911057   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHKeyPath
	I0814 01:27:12.911207   69391 main.go:141] libmachine: (auto-612440) Calling .GetSSHUsername
	I0814 01:27:12.911338   69391 sshutil.go:53] new ssh client: &{IP:192.168.50.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/auto-612440/id_rsa Username:docker}
	I0814 01:27:12.996493   69391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 01:27:13.025710   69391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:27:13.173918   69391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:27:13.189385   69391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:27:13.478314   69391 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0814 01:27:13.478488   69391 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:13.478510   69391 main.go:141] libmachine: (auto-612440) Calling .Close
	I0814 01:27:13.478791   69391 main.go:141] libmachine: (auto-612440) DBG | Closing plugin on server side
	I0814 01:27:13.478838   69391 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:13.478862   69391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:13.478888   69391 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:13.478899   69391 main.go:141] libmachine: (auto-612440) Calling .Close
	I0814 01:27:13.479360   69391 node_ready.go:35] waiting up to 15m0s for node "auto-612440" to be "Ready" ...
	I0814 01:27:13.479836   69391 main.go:141] libmachine: (auto-612440) DBG | Closing plugin on server side
	I0814 01:27:13.479854   69391 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:13.479866   69391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:13.513037   69391 node_ready.go:49] node "auto-612440" has status "Ready":"True"
	I0814 01:27:13.513061   69391 node_ready.go:38] duration metric: took 33.673309ms for node "auto-612440" to be "Ready" ...
	I0814 01:27:13.513070   69391 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:27:13.525829   69391 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:13.525850   69391 main.go:141] libmachine: (auto-612440) Calling .Close
	I0814 01:27:13.526099   69391 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:13.526117   69391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:13.527832   69391 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:13.755031   69391 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:13.755051   69391 main.go:141] libmachine: (auto-612440) Calling .Close
	I0814 01:27:13.755333   69391 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:13.755397   69391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:13.755395   69391 main.go:141] libmachine: (auto-612440) DBG | Closing plugin on server side
	I0814 01:27:13.755417   69391 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:13.755429   69391 main.go:141] libmachine: (auto-612440) Calling .Close
	I0814 01:27:13.755674   69391 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:13.755692   69391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:13.757180   69391 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 01:27:13.758136   69391 addons.go:510] duration metric: took 907.094402ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0814 01:27:13.983027   69391 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-612440" context rescaled to 1 replicas
	I0814 01:27:14.933088   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:14.933569   70015 main.go:141] libmachine: (kindnet-612440) Found IP for machine: 192.168.61.73
	I0814 01:27:14.933593   70015 main.go:141] libmachine: (kindnet-612440) Reserving static IP address...
	I0814 01:27:14.933625   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has current primary IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:14.933958   70015 main.go:141] libmachine: (kindnet-612440) DBG | unable to find host DHCP lease matching {name: "kindnet-612440", mac: "52:54:00:2e:21:e4", ip: "192.168.61.73"} in network mk-kindnet-612440
	I0814 01:27:15.006228   70015 main.go:141] libmachine: (kindnet-612440) DBG | Getting to WaitForSSH function...
	I0814 01:27:15.006258   70015 main.go:141] libmachine: (kindnet-612440) Reserved static IP address: 192.168.61.73
	I0814 01:27:15.006271   70015 main.go:141] libmachine: (kindnet-612440) Waiting for SSH to be available...
	I0814 01:27:15.008816   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.009198   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.009229   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.009308   70015 main.go:141] libmachine: (kindnet-612440) DBG | Using SSH client type: external
	I0814 01:27:15.009430   70015 main.go:141] libmachine: (kindnet-612440) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa (-rw-------)
	I0814 01:27:15.009469   70015 main.go:141] libmachine: (kindnet-612440) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:27:15.009497   70015 main.go:141] libmachine: (kindnet-612440) DBG | About to run SSH command:
	I0814 01:27:15.009509   70015 main.go:141] libmachine: (kindnet-612440) DBG | exit 0
	I0814 01:27:15.133773   70015 main.go:141] libmachine: (kindnet-612440) DBG | SSH cmd err, output: <nil>: 
	I0814 01:27:15.134104   70015 main.go:141] libmachine: (kindnet-612440) KVM machine creation complete!
	I0814 01:27:15.134444   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetConfigRaw
	I0814 01:27:15.134967   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:15.135136   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:15.135299   70015 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 01:27:15.135328   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetState
	I0814 01:27:15.136616   70015 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 01:27:15.136628   70015 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 01:27:15.136634   70015 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 01:27:15.136640   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.139069   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.139448   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.139468   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.139615   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:15.139775   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.139912   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.140058   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:15.140226   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:15.140417   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:15.140429   70015 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 01:27:15.240799   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:27:15.240819   70015 main.go:141] libmachine: Detecting the provisioner...
	I0814 01:27:15.240828   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.243471   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.243828   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.243865   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.244014   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:15.244201   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.244351   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.244458   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:15.244619   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:15.244815   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:15.244828   70015 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 01:27:15.350187   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 01:27:15.350278   70015 main.go:141] libmachine: found compatible host: buildroot
	I0814 01:27:15.350294   70015 main.go:141] libmachine: Provisioning with buildroot...
	I0814 01:27:15.350306   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetMachineName
	I0814 01:27:15.350522   70015 buildroot.go:166] provisioning hostname "kindnet-612440"
	I0814 01:27:15.350555   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetMachineName
	I0814 01:27:15.350686   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.353331   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.353683   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.353705   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.353894   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:15.354078   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.354229   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.354376   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:15.354590   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:15.354809   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:15.354828   70015 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-612440 && echo "kindnet-612440" | sudo tee /etc/hostname
	I0814 01:27:15.471071   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-612440
	
	I0814 01:27:15.471101   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.474144   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.474642   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.474677   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.474805   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:15.475008   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.475196   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.475410   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:15.475593   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:15.475767   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:15.475782   70015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-612440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-612440/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-612440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:27:15.587312   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:27:15.587351   70015 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:27:15.587373   70015 buildroot.go:174] setting up certificates
	I0814 01:27:15.587386   70015 provision.go:84] configureAuth start
	I0814 01:27:15.587399   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetMachineName
	I0814 01:27:15.587693   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetIP
	I0814 01:27:15.590569   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.590913   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.590934   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.591060   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.593194   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.593500   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.593525   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.593711   70015 provision.go:143] copyHostCerts
	I0814 01:27:15.593773   70015 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:27:15.593786   70015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:27:15.593863   70015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:27:15.593999   70015 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:27:15.594017   70015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:27:15.594062   70015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:27:15.594157   70015 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:27:15.594166   70015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:27:15.594194   70015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:27:15.594314   70015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.kindnet-612440 san=[127.0.0.1 192.168.61.73 kindnet-612440 localhost minikube]
	I0814 01:27:16.570336   70428 start.go:364] duration metric: took 16.549452732s to acquireMachinesLock for "calico-612440"
	I0814 01:27:16.570446   70428 start.go:93] Provisioning new machine with config: &{Name:calico-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:27:16.570631   70428 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 01:27:15.934527   70015 provision.go:177] copyRemoteCerts
	I0814 01:27:15.934583   70015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:27:15.934607   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:15.937597   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.937900   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:15.937927   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:15.938152   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:15.938373   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:15.938570   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:15.938719   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:16.019835   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:27:16.043395   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0814 01:27:16.064924   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:27:16.086001   70015 provision.go:87] duration metric: took 498.604957ms to configureAuth
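The copyRemoteCerts step above pushes the freshly generated server certificate (with SANs 127.0.0.1, 192.168.61.73, kindnet-612440, localhost, minikube) onto the guest under /etc/docker. A quick manual check of those SANs, not performed by the test itself and assuming shell access to the guest, would be:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'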
	I0814 01:27:16.086026   70015 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:27:16.086220   70015 config.go:182] Loaded profile config "kindnet-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:27:16.086304   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:16.088882   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.089233   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.089262   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.089393   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:16.089597   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.089743   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.089887   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:16.090028   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:16.090213   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:16.090233   70015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:27:16.340368   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:27:16.340399   70015 main.go:141] libmachine: Checking connection to Docker...
	I0814 01:27:16.340411   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetURL
	I0814 01:27:16.341818   70015 main.go:141] libmachine: (kindnet-612440) DBG | Using libvirt version 6000000
	I0814 01:27:16.344238   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.344613   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.344645   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.344788   70015 main.go:141] libmachine: Docker is up and running!
	I0814 01:27:16.344800   70015 main.go:141] libmachine: Reticulating splines...
	I0814 01:27:16.344806   70015 client.go:171] duration metric: took 24.617021021s to LocalClient.Create
	I0814 01:27:16.344830   70015 start.go:167] duration metric: took 24.617079314s to libmachine.API.Create "kindnet-612440"
	I0814 01:27:16.344844   70015 start.go:293] postStartSetup for "kindnet-612440" (driver="kvm2")
	I0814 01:27:16.344855   70015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:27:16.344871   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:16.345083   70015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:27:16.345107   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:16.347105   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.347416   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.347443   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.347569   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:16.347734   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.347892   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:16.348018   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:16.427520   70015 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:27:16.431458   70015 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:27:16.431483   70015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:27:16.431555   70015 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:27:16.431660   70015 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:27:16.431748   70015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:27:16.440358   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:27:16.462759   70015 start.go:296] duration metric: took 117.899313ms for postStartSetup
	I0814 01:27:16.462821   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetConfigRaw
	I0814 01:27:16.463502   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetIP
	I0814 01:27:16.466024   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.466367   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.466403   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.466638   70015 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/config.json ...
	I0814 01:27:16.466821   70015 start.go:128] duration metric: took 24.760074236s to createHost
	I0814 01:27:16.466844   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:16.468995   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.469348   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.469369   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.469546   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:16.469737   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.469895   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.470058   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:16.470260   70015 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:16.470504   70015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.73 22 <nil> <nil>}
	I0814 01:27:16.470518   70015 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:27:16.570196   70015 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723598836.552698414
	
	I0814 01:27:16.570218   70015 fix.go:216] guest clock: 1723598836.552698414
	I0814 01:27:16.570225   70015 fix.go:229] Guest: 2024-08-14 01:27:16.552698414 +0000 UTC Remote: 2024-08-14 01:27:16.466833443 +0000 UTC m=+30.812842293 (delta=85.864971ms)
	I0814 01:27:16.570243   70015 fix.go:200] guest clock delta is within tolerance: 85.864971ms
	I0814 01:27:16.570248   70015 start.go:83] releasing machines lock for "kindnet-612440", held for 24.863658321s
	I0814 01:27:16.570274   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:16.570570   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetIP
	I0814 01:27:16.573516   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.573920   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.573953   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.574165   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:16.574677   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:16.574881   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:16.575025   70015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:27:16.575086   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:16.575177   70015 ssh_runner.go:195] Run: cat /version.json
	I0814 01:27:16.575194   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:16.578477   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.578499   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.578905   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.578927   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.578983   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:16.579006   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:16.579213   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:16.579364   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.579397   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:16.579486   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:16.579718   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:16.579717   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:16.579888   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:16.580050   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:16.698082   70015 ssh_runner.go:195] Run: systemctl --version
	I0814 01:27:16.704147   70015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:27:16.855827   70015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:27:16.862116   70015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:27:16.862170   70015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:27:16.876813   70015 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:27:16.876840   70015 start.go:495] detecting cgroup driver to use...
	I0814 01:27:16.876906   70015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:27:16.892980   70015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:27:16.907460   70015 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:27:16.907530   70015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:27:16.924135   70015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:27:16.939511   70015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:27:17.082865   70015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:27:17.248510   70015 docker.go:233] disabling docker service ...
	I0814 01:27:17.248586   70015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:27:17.266322   70015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:27:17.279901   70015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:27:17.398494   70015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:27:17.520849   70015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:27:17.536807   70015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:27:17.557947   70015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:27:17.558022   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.568242   70015 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:27:17.568321   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.580291   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.590408   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.600218   70015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:27:17.610762   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.622272   70015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:17.642350   70015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
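Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch of the end state implied by the commands, not a dump of the actual file):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]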
	I0814 01:27:17.653953   70015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:27:17.663036   70015 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:27:17.663090   70015 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:27:17.674467   70015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
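The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is exactly what the modprobe that follows does. A manual re-check after that point would look like:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # the path exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward            # 1 after the echo above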
	I0814 01:27:17.683082   70015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:17.811743   70015 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:27:17.958158   70015 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:27:17.958229   70015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:27:17.963760   70015 start.go:563] Will wait 60s for crictl version
	I0814 01:27:17.963832   70015 ssh_runner.go:195] Run: which crictl
	I0814 01:27:17.969190   70015 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:27:18.014663   70015 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:27:18.014758   70015 ssh_runner.go:195] Run: crio --version
	I0814 01:27:18.040890   70015 ssh_runner.go:195] Run: crio --version
	I0814 01:27:18.068605   70015 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:27:15.533743   69391 pod_ready.go:102] pod "etcd-auto-612440" in "kube-system" namespace has status "Ready":"False"
	I0814 01:27:17.535242   69391 pod_ready.go:92] pod "etcd-auto-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:17.535271   69391 pod_ready.go:81] duration metric: took 4.007409151s for pod "etcd-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.535308   69391 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.540921   69391 pod_ready.go:92] pod "kube-apiserver-auto-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:17.540944   69391 pod_ready.go:81] duration metric: took 5.623867ms for pod "kube-apiserver-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.540969   69391 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.545771   69391 pod_ready.go:92] pod "kube-controller-manager-auto-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:17.545791   69391 pod_ready.go:81] duration metric: took 4.813128ms for pod "kube-controller-manager-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.545801   69391 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-ckxgt" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.550446   69391 pod_ready.go:92] pod "kube-proxy-ckxgt" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:17.550465   69391 pod_ready.go:81] duration metric: took 4.655458ms for pod "kube-proxy-ckxgt" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.550476   69391 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.554493   69391 pod_ready.go:92] pod "kube-scheduler-auto-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:17.554518   69391 pod_ready.go:81] duration metric: took 4.033368ms for pod "kube-scheduler-auto-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:17.554527   69391 pod_ready.go:38] duration metric: took 4.041445437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:27:17.554544   69391 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:27:17.554602   69391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:27:17.568944   69391 api_server.go:72] duration metric: took 4.717932467s to wait for apiserver process to appear ...
	I0814 01:27:17.568964   69391 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:27:17.568981   69391 api_server.go:253] Checking apiserver healthz at https://192.168.50.74:8443/healthz ...
	I0814 01:27:17.574269   69391 api_server.go:279] https://192.168.50.74:8443/healthz returned 200:
	ok
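The healthz probe above can be reproduced by hand through the generated kubeconfig; assuming the context is named after the profile, as minikube normally does, something like:

    kubectl --context auto-612440 get --raw /healthz    # prints "ok" when the apiserver is healthy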
	I0814 01:27:17.575545   69391 api_server.go:141] control plane version: v1.31.0
	I0814 01:27:17.575571   69391 api_server.go:131] duration metric: took 6.599202ms to wait for apiserver health ...
	I0814 01:27:17.575581   69391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:27:17.734997   69391 system_pods.go:59] 7 kube-system pods found
	I0814 01:27:17.735038   69391 system_pods.go:61] "coredns-6f6b679f8f-zwxpd" [ac8e3cbe-b06c-4ed9-8e3d-055532163138] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:27:17.735047   69391 system_pods.go:61] "etcd-auto-612440" [ddca2921-d338-4129-b3ef-9d167dd1c17a] Running
	I0814 01:27:17.735054   69391 system_pods.go:61] "kube-apiserver-auto-612440" [4dc68fce-b31c-40f5-bf06-2fe986635913] Running
	I0814 01:27:17.735061   69391 system_pods.go:61] "kube-controller-manager-auto-612440" [e5411dba-0325-4ec1-a48d-a8a4dfdcb085] Running
	I0814 01:27:17.735066   69391 system_pods.go:61] "kube-proxy-ckxgt" [1b0e373d-afb9-4335-ba96-12d36a4a332e] Running
	I0814 01:27:17.735071   69391 system_pods.go:61] "kube-scheduler-auto-612440" [64f72f9a-5748-452a-9016-54faea8c5bcb] Running
	I0814 01:27:17.735076   69391 system_pods.go:61] "storage-provisioner" [c220c511-4812-4b03-bed5-b87614144380] Running
	I0814 01:27:17.735086   69391 system_pods.go:74] duration metric: took 159.497403ms to wait for pod list to return data ...
	I0814 01:27:17.735095   69391 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:27:17.932584   69391 default_sa.go:45] found service account: "default"
	I0814 01:27:17.932612   69391 default_sa.go:55] duration metric: took 197.504223ms for default service account to be created ...
	I0814 01:27:17.932623   69391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:27:18.135599   69391 system_pods.go:86] 7 kube-system pods found
	I0814 01:27:18.135633   69391 system_pods.go:89] "coredns-6f6b679f8f-zwxpd" [ac8e3cbe-b06c-4ed9-8e3d-055532163138] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:27:18.135642   69391 system_pods.go:89] "etcd-auto-612440" [ddca2921-d338-4129-b3ef-9d167dd1c17a] Running
	I0814 01:27:18.135650   69391 system_pods.go:89] "kube-apiserver-auto-612440" [4dc68fce-b31c-40f5-bf06-2fe986635913] Running
	I0814 01:27:18.135657   69391 system_pods.go:89] "kube-controller-manager-auto-612440" [e5411dba-0325-4ec1-a48d-a8a4dfdcb085] Running
	I0814 01:27:18.135664   69391 system_pods.go:89] "kube-proxy-ckxgt" [1b0e373d-afb9-4335-ba96-12d36a4a332e] Running
	I0814 01:27:18.135671   69391 system_pods.go:89] "kube-scheduler-auto-612440" [64f72f9a-5748-452a-9016-54faea8c5bcb] Running
	I0814 01:27:18.135681   69391 system_pods.go:89] "storage-provisioner" [c220c511-4812-4b03-bed5-b87614144380] Running
	I0814 01:27:18.135691   69391 system_pods.go:126] duration metric: took 203.061399ms to wait for k8s-apps to be running ...
	I0814 01:27:18.135703   69391 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:27:18.135743   69391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:27:18.151992   69391 system_svc.go:56] duration metric: took 16.279151ms WaitForService to wait for kubelet
	I0814 01:27:18.152025   69391 kubeadm.go:582] duration metric: took 5.301016475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:27:18.152051   69391 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:27:18.333565   69391 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:27:18.333590   69391 node_conditions.go:123] node cpu capacity is 2
	I0814 01:27:18.333602   69391 node_conditions.go:105] duration metric: took 181.546512ms to run NodePressure ...
	I0814 01:27:18.333616   69391 start.go:241] waiting for startup goroutines ...
	I0814 01:27:18.333625   69391 start.go:246] waiting for cluster config update ...
	I0814 01:27:18.333638   69391 start.go:255] writing updated cluster config ...
	I0814 01:27:18.334026   69391 ssh_runner.go:195] Run: rm -f paused
	I0814 01:27:18.408766   69391 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:27:18.411808   69391 out.go:177] * Done! kubectl is now configured to use "auto-612440" cluster and "default" namespace by default
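At this point the kubeconfig's current context points at the new cluster, so an ordinary command such as the following (illustrative only, not run by the test) would target auto-612440:

    kubectl --context auto-612440 get pods -A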
	I0814 01:27:16.572617   70428 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0814 01:27:16.572852   70428 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:16.572912   70428 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:16.590934   70428 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0814 01:27:16.591402   70428 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:16.591966   70428 main.go:141] libmachine: Using API Version  1
	I0814 01:27:16.591987   70428 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:16.592340   70428 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:16.592524   70428 main.go:141] libmachine: (calico-612440) Calling .GetMachineName
	I0814 01:27:16.592683   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:16.592831   70428 start.go:159] libmachine.API.Create for "calico-612440" (driver="kvm2")
	I0814 01:27:16.592859   70428 client.go:168] LocalClient.Create starting
	I0814 01:27:16.592909   70428 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem
	I0814 01:27:16.592940   70428 main.go:141] libmachine: Decoding PEM data...
	I0814 01:27:16.592954   70428 main.go:141] libmachine: Parsing certificate...
	I0814 01:27:16.593020   70428 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem
	I0814 01:27:16.593046   70428 main.go:141] libmachine: Decoding PEM data...
	I0814 01:27:16.593069   70428 main.go:141] libmachine: Parsing certificate...
	I0814 01:27:16.593097   70428 main.go:141] libmachine: Running pre-create checks...
	I0814 01:27:16.593113   70428 main.go:141] libmachine: (calico-612440) Calling .PreCreateCheck
	I0814 01:27:16.593486   70428 main.go:141] libmachine: (calico-612440) Calling .GetConfigRaw
	I0814 01:27:16.593873   70428 main.go:141] libmachine: Creating machine...
	I0814 01:27:16.593885   70428 main.go:141] libmachine: (calico-612440) Calling .Create
	I0814 01:27:16.594030   70428 main.go:141] libmachine: (calico-612440) Creating KVM machine...
	I0814 01:27:16.595179   70428 main.go:141] libmachine: (calico-612440) DBG | found existing default KVM network
	I0814 01:27:16.596601   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.596436   70599 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:46:e4} reservation:<nil>}
	I0814 01:27:16.597790   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.597700   70599 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:84:01:bb} reservation:<nil>}
	I0814 01:27:16.599102   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.599029   70599 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:96:08:ca} reservation:<nil>}
	I0814 01:27:16.600178   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.600075   70599 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030b1e0}
	I0814 01:27:16.600204   70428 main.go:141] libmachine: (calico-612440) DBG | created network xml: 
	I0814 01:27:16.600215   70428 main.go:141] libmachine: (calico-612440) DBG | <network>
	I0814 01:27:16.600231   70428 main.go:141] libmachine: (calico-612440) DBG |   <name>mk-calico-612440</name>
	I0814 01:27:16.600241   70428 main.go:141] libmachine: (calico-612440) DBG |   <dns enable='no'/>
	I0814 01:27:16.600256   70428 main.go:141] libmachine: (calico-612440) DBG |   
	I0814 01:27:16.600269   70428 main.go:141] libmachine: (calico-612440) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0814 01:27:16.600288   70428 main.go:141] libmachine: (calico-612440) DBG |     <dhcp>
	I0814 01:27:16.600301   70428 main.go:141] libmachine: (calico-612440) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0814 01:27:16.600310   70428 main.go:141] libmachine: (calico-612440) DBG |     </dhcp>
	I0814 01:27:16.600324   70428 main.go:141] libmachine: (calico-612440) DBG |   </ip>
	I0814 01:27:16.600334   70428 main.go:141] libmachine: (calico-612440) DBG |   
	I0814 01:27:16.600343   70428 main.go:141] libmachine: (calico-612440) DBG | </network>
	I0814 01:27:16.600356   70428 main.go:141] libmachine: (calico-612440) DBG | 
	I0814 01:27:16.605579   70428 main.go:141] libmachine: (calico-612440) DBG | trying to create private KVM network mk-calico-612440 192.168.72.0/24...
	I0814 01:27:16.674213   70428 main.go:141] libmachine: (calico-612440) DBG | private KVM network mk-calico-612440 192.168.72.0/24 created
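The generated network XML above is now registered with libvirt as mk-calico-612440; assuming virsh access to the same qemu:///system URI used by the driver, it could be inspected with:

    virsh --connect qemu:///system net-dumpxml mk-calico-612440
    virsh --connect qemu:///system net-dhcp-leases mk-calico-612440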
	I0814 01:27:16.674249   70428 main.go:141] libmachine: (calico-612440) Setting up store path in /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440 ...
	I0814 01:27:16.674263   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.674206   70599 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:27:16.674294   70428 main.go:141] libmachine: (calico-612440) Building disk image from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 01:27:16.674360   70428 main.go:141] libmachine: (calico-612440) Downloading /home/jenkins/minikube-integration/19429-9425/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 01:27:16.918715   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:16.918599   70599 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa...
	I0814 01:27:17.127234   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:17.127075   70599 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/calico-612440.rawdisk...
	I0814 01:27:17.127278   70428 main.go:141] libmachine: (calico-612440) DBG | Writing magic tar header
	I0814 01:27:17.127294   70428 main.go:141] libmachine: (calico-612440) DBG | Writing SSH key tar header
	I0814 01:27:17.127310   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:17.127195   70599 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440 ...
	I0814 01:27:17.127333   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440
	I0814 01:27:17.127370   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440 (perms=drwx------)
	I0814 01:27:17.127389   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube/machines
	I0814 01:27:17.127405   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube/machines (perms=drwxr-xr-x)
	I0814 01:27:17.127423   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425/.minikube (perms=drwxr-xr-x)
	I0814 01:27:17.127437   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins/minikube-integration/19429-9425 (perms=drwxrwxr-x)
	I0814 01:27:17.127450   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 01:27:17.127465   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:27:17.127478   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19429-9425
	I0814 01:27:17.127491   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 01:27:17.127504   70428 main.go:141] libmachine: (calico-612440) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 01:27:17.127521   70428 main.go:141] libmachine: (calico-612440) Creating domain...
	I0814 01:27:17.127533   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home/jenkins
	I0814 01:27:17.127547   70428 main.go:141] libmachine: (calico-612440) DBG | Checking permissions on dir: /home
	I0814 01:27:17.127561   70428 main.go:141] libmachine: (calico-612440) DBG | Skipping /home - not owner
	I0814 01:27:17.128817   70428 main.go:141] libmachine: (calico-612440) define libvirt domain using xml: 
	I0814 01:27:17.128846   70428 main.go:141] libmachine: (calico-612440) <domain type='kvm'>
	I0814 01:27:17.128858   70428 main.go:141] libmachine: (calico-612440)   <name>calico-612440</name>
	I0814 01:27:17.128867   70428 main.go:141] libmachine: (calico-612440)   <memory unit='MiB'>3072</memory>
	I0814 01:27:17.128876   70428 main.go:141] libmachine: (calico-612440)   <vcpu>2</vcpu>
	I0814 01:27:17.128886   70428 main.go:141] libmachine: (calico-612440)   <features>
	I0814 01:27:17.128897   70428 main.go:141] libmachine: (calico-612440)     <acpi/>
	I0814 01:27:17.128906   70428 main.go:141] libmachine: (calico-612440)     <apic/>
	I0814 01:27:17.128914   70428 main.go:141] libmachine: (calico-612440)     <pae/>
	I0814 01:27:17.128923   70428 main.go:141] libmachine: (calico-612440)     
	I0814 01:27:17.128931   70428 main.go:141] libmachine: (calico-612440)   </features>
	I0814 01:27:17.128957   70428 main.go:141] libmachine: (calico-612440)   <cpu mode='host-passthrough'>
	I0814 01:27:17.128968   70428 main.go:141] libmachine: (calico-612440)   
	I0814 01:27:17.128975   70428 main.go:141] libmachine: (calico-612440)   </cpu>
	I0814 01:27:17.128984   70428 main.go:141] libmachine: (calico-612440)   <os>
	I0814 01:27:17.128993   70428 main.go:141] libmachine: (calico-612440)     <type>hvm</type>
	I0814 01:27:17.129001   70428 main.go:141] libmachine: (calico-612440)     <boot dev='cdrom'/>
	I0814 01:27:17.129011   70428 main.go:141] libmachine: (calico-612440)     <boot dev='hd'/>
	I0814 01:27:17.129022   70428 main.go:141] libmachine: (calico-612440)     <bootmenu enable='no'/>
	I0814 01:27:17.129032   70428 main.go:141] libmachine: (calico-612440)   </os>
	I0814 01:27:17.129040   70428 main.go:141] libmachine: (calico-612440)   <devices>
	I0814 01:27:17.129051   70428 main.go:141] libmachine: (calico-612440)     <disk type='file' device='cdrom'>
	I0814 01:27:17.129086   70428 main.go:141] libmachine: (calico-612440)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/boot2docker.iso'/>
	I0814 01:27:17.129111   70428 main.go:141] libmachine: (calico-612440)       <target dev='hdc' bus='scsi'/>
	I0814 01:27:17.129126   70428 main.go:141] libmachine: (calico-612440)       <readonly/>
	I0814 01:27:17.129138   70428 main.go:141] libmachine: (calico-612440)     </disk>
	I0814 01:27:17.129156   70428 main.go:141] libmachine: (calico-612440)     <disk type='file' device='disk'>
	I0814 01:27:17.129173   70428 main.go:141] libmachine: (calico-612440)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 01:27:17.129211   70428 main.go:141] libmachine: (calico-612440)       <source file='/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/calico-612440.rawdisk'/>
	I0814 01:27:17.129262   70428 main.go:141] libmachine: (calico-612440)       <target dev='hda' bus='virtio'/>
	I0814 01:27:17.129275   70428 main.go:141] libmachine: (calico-612440)     </disk>
	I0814 01:27:17.129285   70428 main.go:141] libmachine: (calico-612440)     <interface type='network'>
	I0814 01:27:17.129295   70428 main.go:141] libmachine: (calico-612440)       <source network='mk-calico-612440'/>
	I0814 01:27:17.129308   70428 main.go:141] libmachine: (calico-612440)       <model type='virtio'/>
	I0814 01:27:17.129334   70428 main.go:141] libmachine: (calico-612440)     </interface>
	I0814 01:27:17.129355   70428 main.go:141] libmachine: (calico-612440)     <interface type='network'>
	I0814 01:27:17.129366   70428 main.go:141] libmachine: (calico-612440)       <source network='default'/>
	I0814 01:27:17.129376   70428 main.go:141] libmachine: (calico-612440)       <model type='virtio'/>
	I0814 01:27:17.129385   70428 main.go:141] libmachine: (calico-612440)     </interface>
	I0814 01:27:17.129397   70428 main.go:141] libmachine: (calico-612440)     <serial type='pty'>
	I0814 01:27:17.129408   70428 main.go:141] libmachine: (calico-612440)       <target port='0'/>
	I0814 01:27:17.129416   70428 main.go:141] libmachine: (calico-612440)     </serial>
	I0814 01:27:17.129426   70428 main.go:141] libmachine: (calico-612440)     <console type='pty'>
	I0814 01:27:17.129450   70428 main.go:141] libmachine: (calico-612440)       <target type='serial' port='0'/>
	I0814 01:27:17.129461   70428 main.go:141] libmachine: (calico-612440)     </console>
	I0814 01:27:17.129469   70428 main.go:141] libmachine: (calico-612440)     <rng model='virtio'>
	I0814 01:27:17.129496   70428 main.go:141] libmachine: (calico-612440)       <backend model='random'>/dev/random</backend>
	I0814 01:27:17.129517   70428 main.go:141] libmachine: (calico-612440)     </rng>
	I0814 01:27:17.129536   70428 main.go:141] libmachine: (calico-612440)     
	I0814 01:27:17.129555   70428 main.go:141] libmachine: (calico-612440)     
	I0814 01:27:17.129566   70428 main.go:141] libmachine: (calico-612440)   </devices>
	I0814 01:27:17.129576   70428 main.go:141] libmachine: (calico-612440) </domain>
	I0814 01:27:17.129586   70428 main.go:141] libmachine: (calico-612440) 
	I0814 01:27:17.134237   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:bf:f6:b8 in network default
	I0814 01:27:17.134910   70428 main.go:141] libmachine: (calico-612440) Ensuring networks are active...
	I0814 01:27:17.134932   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:17.135894   70428 main.go:141] libmachine: (calico-612440) Ensuring network default is active
	I0814 01:27:17.136232   70428 main.go:141] libmachine: (calico-612440) Ensuring network mk-calico-612440 is active
	I0814 01:27:17.136963   70428 main.go:141] libmachine: (calico-612440) Getting domain xml...
	I0814 01:27:17.137943   70428 main.go:141] libmachine: (calico-612440) Creating domain...
	I0814 01:27:18.554928   70428 main.go:141] libmachine: (calico-612440) Waiting to get IP...
	I0814 01:27:18.555998   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:18.556490   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:18.556510   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:18.556420   70599 retry.go:31] will retry after 227.56008ms: waiting for machine to come up
	I0814 01:27:18.786371   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:18.786618   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:18.786664   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:18.786523   70599 retry.go:31] will retry after 343.115884ms: waiting for machine to come up
	I0814 01:27:19.131132   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:19.131860   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:19.131887   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:19.131836   70599 retry.go:31] will retry after 395.541725ms: waiting for machine to come up
	I0814 01:27:19.529395   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:19.529912   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:19.529935   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:19.529865   70599 retry.go:31] will retry after 503.197447ms: waiting for machine to come up
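The retry loop above is minikube polling libvirt for the new domain's DHCP lease; the manual equivalent, again assuming the same qemu:///system connection, would be:

    virsh --connect qemu:///system domifaddr calico-612440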
	I0814 01:27:18.069730   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetIP
	I0814 01:27:18.072628   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:18.072974   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:18.073003   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:18.073199   70015 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:27:18.076889   70015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
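The bash one-liner above rewrites /etc/hosts via a temp file: any existing host.minikube.internal entry is filtered out and the gateway mapping appended, so afterwards the guest's /etc/hosts should contain a line like:

    192.168.61.1	host.minikube.internal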
	I0814 01:27:18.087879   70015 kubeadm.go:883] updating cluster {Name:kindnet-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:27:18.087985   70015 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:27:18.088045   70015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:27:18.118488   70015 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:27:18.118544   70015 ssh_runner.go:195] Run: which lz4
	I0814 01:27:18.122408   70015 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:27:18.126203   70015 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:27:18.126234   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:27:19.493937   70015 crio.go:462] duration metric: took 1.371575138s to copy over tarball
	I0814 01:27:19.494020   70015 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:27:21.914299   70015 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.420246985s)
	I0814 01:27:21.914333   70015 crio.go:469] duration metric: took 2.420368085s to extract the tarball
	I0814 01:27:21.914343   70015 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:27:21.960429   70015 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:27:22.025138   70015 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:27:22.025161   70015 cache_images.go:84] Images are preloaded, skipping loading
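Because the preload tarball was unpacked straight into /var, CRI-O's image store already contains the v1.31.0 control-plane images without any registry pulls; a hedged spot check on the guest would be:

    sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.31.0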
	I0814 01:27:22.025171   70015 kubeadm.go:934] updating node { 192.168.61.73 8443 v1.31.0 crio true true} ...
	I0814 01:27:22.025289   70015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-612440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kindnet-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0814 01:27:22.025372   70015 ssh_runner.go:195] Run: crio config
	I0814 01:27:22.076612   70015 cni.go:84] Creating CNI manager for "kindnet"
	I0814 01:27:22.076635   70015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:27:22.076668   70015 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.73 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-612440 NodeName:kindnet-612440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:27:22.076845   70015 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-612440"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:27:22.076914   70015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:27:22.086474   70015 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:27:22.086548   70015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:27:22.097087   70015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 01:27:22.115315   70015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:27:22.133470   70015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0814 01:27:22.151509   70015 ssh_runner.go:195] Run: grep 192.168.61.73	control-plane.minikube.internal$ /etc/hosts
	I0814 01:27:22.155093   70015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:27:22.166521   70015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:22.291964   70015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:27:22.307888   70015 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440 for IP: 192.168.61.73
	I0814 01:27:22.307913   70015 certs.go:194] generating shared ca certs ...
	I0814 01:27:22.307933   70015 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.308111   70015 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:27:22.308166   70015 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:27:22.308179   70015 certs.go:256] generating profile certs ...
	I0814 01:27:22.308257   70015 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.key
	I0814 01:27:22.308283   70015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.crt with IP's: []
	I0814 01:27:22.402315   70015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.crt ...
	I0814 01:27:22.402346   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.crt: {Name:mk10834e1b5dd4f3cfa236144c8c9122bb568ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.402514   70015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.key ...
	I0814 01:27:22.402529   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/client.key: {Name:mkb280be2e2d61dc5e41a70766fd679135b28bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.402608   70015 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key.f6a65b75
	I0814 01:27:22.402628   70015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt.f6a65b75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.73]
	I0814 01:27:22.620220   70015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt.f6a65b75 ...
	I0814 01:27:22.620248   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt.f6a65b75: {Name:mk7b9329748d20a9ab25a87b117b063d9363ee19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.620406   70015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key.f6a65b75 ...
	I0814 01:27:22.620418   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key.f6a65b75: {Name:mk348c1e277597caa712800e5f10cbc346912286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.620497   70015 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt.f6a65b75 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt
	I0814 01:27:22.620582   70015 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key.f6a65b75 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key
	I0814 01:27:22.620643   70015 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.key
	I0814 01:27:22.620658   70015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.crt with IP's: []
	I0814 01:27:22.686351   70015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.crt ...
	I0814 01:27:22.686378   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.crt: {Name:mke0c3c86687929abfd711298efa0beae3ccf2f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.686529   70015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.key ...
	I0814 01:27:22.686539   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.key: {Name:mk71c6f587d135d6b7d86760641dc761f4ddc52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:22.686696   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:27:22.686729   70015 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:27:22.686737   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:27:22.686774   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:27:22.686797   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:27:22.686819   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:27:22.686876   70015 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:27:22.687758   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:27:22.712864   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:27:22.735900   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:27:22.761224   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:27:22.787840   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 01:27:22.815369   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:27:22.841276   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:27:22.869464   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/kindnet-612440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:27:22.893555   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:27:22.917746   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:27:22.943937   70015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:27:22.966119   70015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:27:22.989935   70015 ssh_runner.go:195] Run: openssl version
	I0814 01:27:23.004010   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:27:23.019002   70015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:27:23.023928   70015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:27:23.023984   70015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:27:23.029998   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:27:23.043518   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:27:23.054196   70015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:23.058534   70015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:23.058600   70015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:23.065552   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:27:23.076776   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:27:23.087709   70015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:27:23.091923   70015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:27:23.091988   70015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:27:23.097462   70015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:27:23.108400   70015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:27:23.112863   70015 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 01:27:23.112921   70015 kubeadm.go:392] StartCluster: {Name:kindnet-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:kindnet-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:27:23.113007   70015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:27:23.113059   70015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:27:23.147815   70015 cri.go:89] found id: ""
	I0814 01:27:23.147923   70015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:27:23.158296   70015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:27:23.168091   70015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:27:23.178377   70015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:27:23.178395   70015 kubeadm.go:157] found existing configuration files:
	
	I0814 01:27:23.178446   70015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:27:23.188656   70015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:27:23.188724   70015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:27:23.199055   70015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:27:23.208797   70015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:27:23.208871   70015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:27:23.218798   70015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:27:23.228169   70015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:27:23.228241   70015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:27:23.237268   70015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:27:23.246329   70015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:27:23.246412   70015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:27:23.256191   70015 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:27:23.310815   70015 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:27:23.310910   70015 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:27:23.415811   70015 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:27:23.415973   70015 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:27:23.416119   70015 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:27:23.424444   70015 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:27:20.034373   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:20.034862   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:20.034890   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:20.034821   70599 retry.go:31] will retry after 629.584173ms: waiting for machine to come up
	I0814 01:27:20.666587   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:20.667084   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:20.667126   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:20.667044   70599 retry.go:31] will retry after 665.380719ms: waiting for machine to come up
	I0814 01:27:21.333792   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:21.334342   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:21.334369   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:21.334294   70599 retry.go:31] will retry after 740.62012ms: waiting for machine to come up
	I0814 01:27:22.076498   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:22.077051   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:22.077077   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:22.077013   70599 retry.go:31] will retry after 1.104269938s: waiting for machine to come up
	I0814 01:27:23.183235   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:23.183698   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:23.183722   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:23.183665   70599 retry.go:31] will retry after 1.531252021s: waiting for machine to come up
	I0814 01:27:24.716990   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:24.717522   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:24.717547   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:24.717472   70599 retry.go:31] will retry after 2.198704823s: waiting for machine to come up
	I0814 01:27:23.615049   70015 out.go:204]   - Generating certificates and keys ...
	I0814 01:27:23.615191   70015 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:27:23.615295   70015 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:27:23.615430   70015 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 01:27:23.698637   70015 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 01:27:23.796254   70015 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 01:27:23.862957   70015 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 01:27:23.966074   70015 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 01:27:23.966404   70015 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-612440 localhost] and IPs [192.168.61.73 127.0.0.1 ::1]
	I0814 01:27:24.300648   70015 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 01:27:24.301790   70015 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-612440 localhost] and IPs [192.168.61.73 127.0.0.1 ::1]
	I0814 01:27:24.466694   70015 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 01:27:24.898864   70015 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 01:27:25.075827   70015 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 01:27:25.075950   70015 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:27:25.245966   70015 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:27:25.492338   70015 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:27:25.783137   70015 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:27:25.959815   70015 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:27:26.079228   70015 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:27:26.079878   70015 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:27:26.082461   70015 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:27:26.917981   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:26.918630   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:26.918662   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:26.918578   70599 retry.go:31] will retry after 2.612984386s: waiting for machine to come up
	I0814 01:27:29.532716   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:29.533157   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:29.533179   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:29.533122   70599 retry.go:31] will retry after 2.267171653s: waiting for machine to come up
	I0814 01:27:26.084263   70015 out.go:204]   - Booting up control plane ...
	I0814 01:27:26.084353   70015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:27:26.084439   70015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:27:26.084637   70015 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:27:26.102813   70015 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:27:26.109277   70015 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:27:26.109343   70015 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:27:26.241028   70015 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:27:26.241183   70015 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:27:26.743500   70015 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.690487ms
	I0814 01:27:26.743606   70015 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:27:31.745453   70015 kubeadm.go:310] [api-check] The API server is healthy after 5.002150971s
	I0814 01:27:31.757430   70015 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:27:31.773296   70015 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:27:31.804758   70015 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:27:31.805025   70015 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-612440 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:27:31.814848   70015 kubeadm.go:310] [bootstrap-token] Using token: ewzcu9.517lo8oyaxdntwmy
	I0814 01:27:31.816164   70015 out.go:204]   - Configuring RBAC rules ...
	I0814 01:27:31.816307   70015 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:27:31.819923   70015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:27:31.825614   70015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:27:31.831228   70015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:27:31.833845   70015 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:27:31.836721   70015 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:27:32.152738   70015 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:27:32.573342   70015 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:27:33.152657   70015 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:27:33.152678   70015 kubeadm.go:310] 
	I0814 01:27:33.152788   70015 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:27:33.152811   70015 kubeadm.go:310] 
	I0814 01:27:33.152940   70015 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:27:33.152951   70015 kubeadm.go:310] 
	I0814 01:27:33.152972   70015 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:27:33.153046   70015 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:27:33.153100   70015 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:27:33.153110   70015 kubeadm.go:310] 
	I0814 01:27:33.153174   70015 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:27:33.153184   70015 kubeadm.go:310] 
	I0814 01:27:33.153243   70015 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:27:33.153250   70015 kubeadm.go:310] 
	I0814 01:27:33.153297   70015 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:27:33.153419   70015 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:27:33.153479   70015 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:27:33.153490   70015 kubeadm.go:310] 
	I0814 01:27:33.153602   70015 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:27:33.153712   70015 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:27:33.153725   70015 kubeadm.go:310] 
	I0814 01:27:33.153834   70015 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ewzcu9.517lo8oyaxdntwmy \
	I0814 01:27:33.153928   70015 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:27:33.153949   70015 kubeadm.go:310] 	--control-plane 
	I0814 01:27:33.153953   70015 kubeadm.go:310] 
	I0814 01:27:33.154088   70015 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:27:33.154099   70015 kubeadm.go:310] 
	I0814 01:27:33.154207   70015 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ewzcu9.517lo8oyaxdntwmy \
	I0814 01:27:33.154341   70015 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:27:33.155235   70015 kubeadm.go:310] W0814 01:27:23.299201     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:27:33.155515   70015 kubeadm.go:310] W0814 01:27:23.300022     856 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:27:33.155613   70015 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:27:33.155640   70015 cni.go:84] Creating CNI manager for "kindnet"
	I0814 01:27:33.157151   70015 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 01:27:31.802391   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:31.802898   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:31.802937   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:31.802858   70599 retry.go:31] will retry after 3.12068249s: waiting for machine to come up
	I0814 01:27:34.927730   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:34.928170   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find current IP address of domain calico-612440 in network mk-calico-612440
	I0814 01:27:34.928193   70428 main.go:141] libmachine: (calico-612440) DBG | I0814 01:27:34.928132   70599 retry.go:31] will retry after 5.654906396s: waiting for machine to come up
	I0814 01:27:33.158179   70015 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 01:27:33.163366   70015 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 01:27:33.163383   70015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 01:27:33.179922   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 01:27:33.444438   70015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:27:33.444544   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:33.444544   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-612440 minikube.k8s.io/updated_at=2024_08_14T01_27_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=kindnet-612440 minikube.k8s.io/primary=true
	I0814 01:27:33.552604   70015 ops.go:34] apiserver oom_adj: -16
	I0814 01:27:33.552797   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:34.052813   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:34.553048   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:35.053551   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:35.552824   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:36.053454   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:36.553573   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:37.053018   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:37.553069   70015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:37.669137   70015 kubeadm.go:1113] duration metric: took 4.224656961s to wait for elevateKubeSystemPrivileges
	I0814 01:27:37.669179   70015 kubeadm.go:394] duration metric: took 14.55626074s to StartCluster
	I0814 01:27:37.669203   70015 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:37.669280   70015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:27:37.671677   70015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:37.671947   70015 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:27:37.672339   70015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 01:27:37.672597   70015 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:27:37.672745   70015 config.go:182] Loaded profile config "kindnet-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:27:37.672755   70015 addons.go:69] Setting default-storageclass=true in profile "kindnet-612440"
	I0814 01:27:37.672779   70015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-612440"
	I0814 01:27:37.672746   70015 addons.go:69] Setting storage-provisioner=true in profile "kindnet-612440"
	I0814 01:27:37.672834   70015 addons.go:234] Setting addon storage-provisioner=true in "kindnet-612440"
	I0814 01:27:37.672889   70015 host.go:66] Checking if "kindnet-612440" exists ...
	I0814 01:27:37.673133   70015 out.go:177] * Verifying Kubernetes components...
	I0814 01:27:37.673482   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:37.673738   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:37.673758   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:37.673794   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:37.674801   70015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:37.694812   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0814 01:27:37.694898   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0814 01:27:37.695377   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:37.695467   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:37.695955   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:27:37.695977   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:37.696129   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:27:37.696148   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:37.696340   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:37.696464   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:37.696537   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetState
	I0814 01:27:37.697085   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:37.697126   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:37.700222   70015 addons.go:234] Setting addon default-storageclass=true in "kindnet-612440"
	I0814 01:27:37.700266   70015 host.go:66] Checking if "kindnet-612440" exists ...
	I0814 01:27:37.700627   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:37.700677   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:37.713593   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I0814 01:27:37.714136   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:37.714652   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:27:37.714677   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:37.715048   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:37.715257   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetState
	I0814 01:27:37.715652   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0814 01:27:37.716013   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:37.716487   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:27:37.716511   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:37.716882   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:37.717155   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:37.717487   70015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:27:37.717520   70015 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:27:37.718802   70015 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:27:37.720122   70015 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:27:37.720140   70015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:27:37.720159   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:37.723641   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:37.724143   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:37.724173   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:37.724512   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:37.724695   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:37.724843   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:37.724971   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:37.732591   70015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0814 01:27:37.732947   70015 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:27:37.733427   70015 main.go:141] libmachine: Using API Version  1
	I0814 01:27:37.733446   70015 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:27:37.733890   70015 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:27:37.734096   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetState
	I0814 01:27:37.736107   70015 main.go:141] libmachine: (kindnet-612440) Calling .DriverName
	I0814 01:27:37.736676   70015 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:27:37.736693   70015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:27:37.736712   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHHostname
	I0814 01:27:37.739742   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:37.740116   70015 main.go:141] libmachine: (kindnet-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:21:e4", ip: ""} in network mk-kindnet-612440: {Iface:virbr4 ExpiryTime:2024-08-14 02:27:06 +0000 UTC Type:0 Mac:52:54:00:2e:21:e4 Iaid: IPaddr:192.168.61.73 Prefix:24 Hostname:kindnet-612440 Clientid:01:52:54:00:2e:21:e4}
	I0814 01:27:37.740133   70015 main.go:141] libmachine: (kindnet-612440) DBG | domain kindnet-612440 has defined IP address 192.168.61.73 and MAC address 52:54:00:2e:21:e4 in network mk-kindnet-612440
	I0814 01:27:37.740330   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHPort
	I0814 01:27:37.740557   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHKeyPath
	I0814 01:27:37.740782   70015 main.go:141] libmachine: (kindnet-612440) Calling .GetSSHUsername
	I0814 01:27:37.740925   70015 sshutil.go:53] new ssh client: &{IP:192.168.61.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/kindnet-612440/id_rsa Username:docker}
	I0814 01:27:37.878965   70015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 01:27:37.903262   70015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:27:38.061261   70015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:27:38.075626   70015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:27:38.426464   70015 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0814 01:27:38.426598   70015 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:38.426625   70015 main.go:141] libmachine: (kindnet-612440) Calling .Close
	I0814 01:27:38.426950   70015 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:38.426968   70015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:38.426978   70015 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:38.426985   70015 main.go:141] libmachine: (kindnet-612440) Calling .Close
	I0814 01:27:38.427243   70015 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:38.427265   70015 main.go:141] libmachine: (kindnet-612440) DBG | Closing plugin on server side
	I0814 01:27:38.427267   70015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:38.427947   70015 node_ready.go:35] waiting up to 15m0s for node "kindnet-612440" to be "Ready" ...
	I0814 01:27:38.462145   70015 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:38.462172   70015 main.go:141] libmachine: (kindnet-612440) Calling .Close
	I0814 01:27:38.462446   70015 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:38.462450   70015 main.go:141] libmachine: (kindnet-612440) DBG | Closing plugin on server side
	I0814 01:27:38.462461   70015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:38.818763   70015 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:38.818790   70015 main.go:141] libmachine: (kindnet-612440) Calling .Close
	I0814 01:27:38.819139   70015 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:38.819160   70015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:38.819170   70015 main.go:141] libmachine: Making call to close driver server
	I0814 01:27:38.819168   70015 main.go:141] libmachine: (kindnet-612440) DBG | Closing plugin on server side
	I0814 01:27:38.819179   70015 main.go:141] libmachine: (kindnet-612440) Calling .Close
	I0814 01:27:38.819410   70015 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:27:38.819434   70015 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:27:38.820970   70015 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 01:27:38.821992   70015 addons.go:510] duration metric: took 1.149397955s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0814 01:27:38.931401   70015 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-612440" context rescaled to 1 replicas
	I0814 01:27:40.431777   70015 node_ready.go:53] node "kindnet-612440" has status "Ready":"False"
	I0814 01:27:40.587132   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.587647   70428 main.go:141] libmachine: (calico-612440) Found IP for machine: 192.168.72.199
	I0814 01:27:40.587676   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has current primary IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.587685   70428 main.go:141] libmachine: (calico-612440) Reserving static IP address...
	I0814 01:27:40.587988   70428 main.go:141] libmachine: (calico-612440) DBG | unable to find host DHCP lease matching {name: "calico-612440", mac: "52:54:00:88:f2:61", ip: "192.168.72.199"} in network mk-calico-612440
	I0814 01:27:40.660040   70428 main.go:141] libmachine: (calico-612440) Reserved static IP address: 192.168.72.199
	I0814 01:27:40.660071   70428 main.go:141] libmachine: (calico-612440) Waiting for SSH to be available...
	I0814 01:27:40.660081   70428 main.go:141] libmachine: (calico-612440) DBG | Getting to WaitForSSH function...
	I0814 01:27:40.662722   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.663173   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:40.663197   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.663317   70428 main.go:141] libmachine: (calico-612440) DBG | Using SSH client type: external
	I0814 01:27:40.663353   70428 main.go:141] libmachine: (calico-612440) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa (-rw-------)
	I0814 01:27:40.663387   70428 main.go:141] libmachine: (calico-612440) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.199 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:27:40.663405   70428 main.go:141] libmachine: (calico-612440) DBG | About to run SSH command:
	I0814 01:27:40.663418   70428 main.go:141] libmachine: (calico-612440) DBG | exit 0
	I0814 01:27:40.790436   70428 main.go:141] libmachine: (calico-612440) DBG | SSH cmd err, output: <nil>: 
	I0814 01:27:40.790690   70428 main.go:141] libmachine: (calico-612440) KVM machine creation complete!
	I0814 01:27:40.791022   70428 main.go:141] libmachine: (calico-612440) Calling .GetConfigRaw
	I0814 01:27:40.791667   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:40.791866   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:40.792024   70428 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 01:27:40.792040   70428 main.go:141] libmachine: (calico-612440) Calling .GetState
	I0814 01:27:40.793393   70428 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 01:27:40.793407   70428 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 01:27:40.793413   70428 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 01:27:40.793420   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:40.795864   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.796203   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:40.796238   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.796393   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:40.796560   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:40.796719   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:40.796827   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:40.796959   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:40.797139   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:40.797150   70428 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 01:27:40.901299   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:27:40.901327   70428 main.go:141] libmachine: Detecting the provisioner...
	I0814 01:27:40.901339   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:40.903976   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.904444   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:40.904471   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:40.904729   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:40.904916   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:40.905081   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:40.905257   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:40.905455   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:40.905660   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:40.905672   70428 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 01:27:41.014422   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 01:27:41.014503   70428 main.go:141] libmachine: found compatible host: buildroot
	I0814 01:27:41.014517   70428 main.go:141] libmachine: Provisioning with buildroot...
	I0814 01:27:41.014533   70428 main.go:141] libmachine: (calico-612440) Calling .GetMachineName
	I0814 01:27:41.014778   70428 buildroot.go:166] provisioning hostname "calico-612440"
	I0814 01:27:41.014804   70428 main.go:141] libmachine: (calico-612440) Calling .GetMachineName
	I0814 01:27:41.014994   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.017907   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.018307   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.018338   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.018473   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:41.018638   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.018774   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.018928   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:41.019122   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:41.019336   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:41.019349   70428 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-612440 && echo "calico-612440" | sudo tee /etc/hostname
	I0814 01:27:41.140007   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-612440
	
	I0814 01:27:41.140039   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.142605   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.142871   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.142898   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.143085   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:41.143307   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.143473   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.143658   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:41.143891   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:41.144063   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:41.144079   70428 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-612440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-612440/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-612440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:27:41.262292   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:27:41.262325   70428 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:27:41.262381   70428 buildroot.go:174] setting up certificates
	I0814 01:27:41.262399   70428 provision.go:84] configureAuth start
	I0814 01:27:41.262418   70428 main.go:141] libmachine: (calico-612440) Calling .GetMachineName
	I0814 01:27:41.262710   70428 main.go:141] libmachine: (calico-612440) Calling .GetIP
	I0814 01:27:41.265770   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.266093   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.266127   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.266307   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.268992   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.269324   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.269350   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.269481   70428 provision.go:143] copyHostCerts
	I0814 01:27:41.269534   70428 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:27:41.269545   70428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:27:41.269619   70428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:27:41.269761   70428 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:27:41.269773   70428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:27:41.269802   70428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:27:41.269875   70428 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:27:41.269883   70428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:27:41.269917   70428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:27:41.269976   70428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.calico-612440 san=[127.0.0.1 192.168.72.199 calico-612440 localhost minikube]
	I0814 01:27:41.440271   70428 provision.go:177] copyRemoteCerts
	I0814 01:27:41.440325   70428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:27:41.440349   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.443197   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.443607   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.443635   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.443830   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:41.443981   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.444124   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:41.444294   70428 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa Username:docker}
	I0814 01:27:41.528906   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:27:41.551428   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 01:27:41.572600   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:27:41.594272   70428 provision.go:87] duration metric: took 331.856894ms to configureAuth
	I0814 01:27:41.594301   70428 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:27:41.594527   70428 config.go:182] Loaded profile config "calico-612440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:27:41.594619   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.597431   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.597793   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.597820   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.597980   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:41.598179   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.598353   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.598449   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:41.598589   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:41.598791   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:41.598817   70428 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:27:41.864579   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:27:41.864618   70428 main.go:141] libmachine: Checking connection to Docker...
	I0814 01:27:41.864630   70428 main.go:141] libmachine: (calico-612440) Calling .GetURL
	I0814 01:27:41.866092   70428 main.go:141] libmachine: (calico-612440) DBG | Using libvirt version 6000000
	I0814 01:27:41.868506   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.868861   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.868891   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.869027   70428 main.go:141] libmachine: Docker is up and running!
	I0814 01:27:41.869039   70428 main.go:141] libmachine: Reticulating splines...
	I0814 01:27:41.869045   70428 client.go:171] duration metric: took 25.276175964s to LocalClient.Create
	I0814 01:27:41.869083   70428 start.go:167] duration metric: took 25.276237772s to libmachine.API.Create "calico-612440"
	I0814 01:27:41.869095   70428 start.go:293] postStartSetup for "calico-612440" (driver="kvm2")
	I0814 01:27:41.869108   70428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:27:41.869131   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:41.869355   70428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:27:41.869380   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.871306   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.871586   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.871606   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.871730   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:41.871895   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:41.872062   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:41.872218   70428 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa Username:docker}
	I0814 01:27:41.956278   70428 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:27:41.960398   70428 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:27:41.960424   70428 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:27:41.960479   70428 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:27:41.960560   70428 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:27:41.960654   70428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:27:41.969880   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:27:41.992392   70428 start.go:296] duration metric: took 123.283552ms for postStartSetup
	I0814 01:27:41.992445   70428 main.go:141] libmachine: (calico-612440) Calling .GetConfigRaw
	I0814 01:27:41.993142   70428 main.go:141] libmachine: (calico-612440) Calling .GetIP
	I0814 01:27:41.996134   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.996569   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.996594   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.996874   70428 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/config.json ...
	I0814 01:27:41.997077   70428 start.go:128] duration metric: took 25.426429962s to createHost
	I0814 01:27:41.997111   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:41.999622   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:41.999932   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:41.999957   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.000125   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:42.000286   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:42.000427   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:42.000614   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:42.000771   70428 main.go:141] libmachine: Using SSH client type: native
	I0814 01:27:42.000950   70428 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0814 01:27:42.000964   70428 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:27:42.106537   70428 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723598862.081195320
	
	I0814 01:27:42.106566   70428 fix.go:216] guest clock: 1723598862.081195320
	I0814 01:27:42.106576   70428 fix.go:229] Guest: 2024-08-14 01:27:42.08119532 +0000 UTC Remote: 2024-08-14 01:27:41.997087836 +0000 UTC m=+42.091830992 (delta=84.107484ms)
	I0814 01:27:42.106613   70428 fix.go:200] guest clock delta is within tolerance: 84.107484ms
	I0814 01:27:42.106625   70428 start.go:83] releasing machines lock for "calico-612440", held for 25.536256394s
	I0814 01:27:42.106653   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:42.106916   70428 main.go:141] libmachine: (calico-612440) Calling .GetIP
	I0814 01:27:42.109565   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.109952   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:42.109981   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.110112   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:42.110569   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:42.110714   70428 main.go:141] libmachine: (calico-612440) Calling .DriverName
	I0814 01:27:42.110816   70428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:27:42.110868   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:42.110918   70428 ssh_runner.go:195] Run: cat /version.json
	I0814 01:27:42.110954   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHHostname
	I0814 01:27:42.113358   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.113694   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.113755   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:42.113779   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.113901   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:42.114014   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:42.114031   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:42.114034   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:42.114218   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHPort
	I0814 01:27:42.114230   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:42.114380   70428 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa Username:docker}
	I0814 01:27:42.114424   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHKeyPath
	I0814 01:27:42.114546   70428 main.go:141] libmachine: (calico-612440) Calling .GetSSHUsername
	I0814 01:27:42.114681   70428 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/calico-612440/id_rsa Username:docker}
	I0814 01:27:42.228707   70428 ssh_runner.go:195] Run: systemctl --version
	I0814 01:27:42.234269   70428 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:27:42.391492   70428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:27:42.397275   70428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:27:42.397326   70428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:27:42.411781   70428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:27:42.411803   70428 start.go:495] detecting cgroup driver to use...
	I0814 01:27:42.411864   70428 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:27:42.428030   70428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:27:42.442097   70428 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:27:42.442156   70428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:27:42.455034   70428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:27:42.467389   70428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:27:42.577147   70428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:27:42.718076   70428 docker.go:233] disabling docker service ...
	I0814 01:27:42.718171   70428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:27:42.732240   70428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:27:42.744984   70428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:27:42.875248   70428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:27:43.006272   70428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:27:43.019019   70428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:27:43.035838   70428 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:27:43.035907   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.045206   70428 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:27:43.045272   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.054692   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.064196   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.073591   70428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:27:43.084398   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.094316   70428 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.111264   70428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:27:43.120757   70428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:27:43.129088   70428 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:27:43.129145   70428 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:27:43.142241   70428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:27:43.151224   70428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:43.285892   70428 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:27:43.413805   70428 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:27:43.413869   70428 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:27:43.418907   70428 start.go:563] Will wait 60s for crictl version
	I0814 01:27:43.418966   70428 ssh_runner.go:195] Run: which crictl
	I0814 01:27:43.422321   70428 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:27:43.460477   70428 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:27:43.460545   70428 ssh_runner.go:195] Run: crio --version
	I0814 01:27:43.488698   70428 ssh_runner.go:195] Run: crio --version
	I0814 01:27:43.517648   70428 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:27:43.518715   70428 main.go:141] libmachine: (calico-612440) Calling .GetIP
	I0814 01:27:43.521499   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:43.521865   70428 main.go:141] libmachine: (calico-612440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:f2:61", ip: ""} in network mk-calico-612440: {Iface:virbr3 ExpiryTime:2024-08-14 02:27:32 +0000 UTC Type:0 Mac:52:54:00:88:f2:61 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-612440 Clientid:01:52:54:00:88:f2:61}
	I0814 01:27:43.521892   70428 main.go:141] libmachine: (calico-612440) DBG | domain calico-612440 has defined IP address 192.168.72.199 and MAC address 52:54:00:88:f2:61 in network mk-calico-612440
	I0814 01:27:43.522105   70428 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:27:43.526023   70428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:27:43.537809   70428 kubeadm.go:883] updating cluster {Name:calico-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:27:43.537905   70428 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:27:43.537946   70428 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:27:43.567909   70428 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:27:43.567975   70428 ssh_runner.go:195] Run: which lz4
	I0814 01:27:43.571774   70428 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 01:27:43.575634   70428 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:27:43.575668   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:27:44.761869   70428 crio.go:462] duration metric: took 1.190115027s to copy over tarball
	I0814 01:27:44.761953   70428 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:27:42.432098   70015 node_ready.go:53] node "kindnet-612440" has status "Ready":"False"
	I0814 01:27:44.432752   70015 node_ready.go:53] node "kindnet-612440" has status "Ready":"False"
	I0814 01:27:46.960479   70428 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198496825s)
	I0814 01:27:46.960509   70428 crio.go:469] duration metric: took 2.198610235s to extract the tarball
	I0814 01:27:46.960518   70428 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:27:46.998354   70428 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:27:47.040398   70428 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:27:47.040428   70428 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:27:47.040439   70428 kubeadm.go:934] updating node { 192.168.72.199 8443 v1.31.0 crio true true} ...
	I0814 01:27:47.040560   70428 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-612440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:calico-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0814 01:27:47.040628   70428 ssh_runner.go:195] Run: crio config
	I0814 01:27:47.082142   70428 cni.go:84] Creating CNI manager for "calico"
	I0814 01:27:47.082178   70428 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:27:47.082207   70428 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-612440 NodeName:calico-612440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:27:47.082384   70428 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-612440"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:27:47.082469   70428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:27:47.093029   70428 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:27:47.093103   70428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:27:47.102142   70428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 01:27:47.117320   70428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:27:47.133040   70428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0814 01:27:47.149620   70428 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0814 01:27:47.153400   70428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:27:47.165142   70428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:27:47.306070   70428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:27:47.322087   70428 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440 for IP: 192.168.72.199
	I0814 01:27:47.322111   70428 certs.go:194] generating shared ca certs ...
	I0814 01:27:47.322125   70428 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.322306   70428 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:27:47.322367   70428 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:27:47.322378   70428 certs.go:256] generating profile certs ...
	I0814 01:27:47.322427   70428 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.key
	I0814 01:27:47.322439   70428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.crt with IP's: []
	I0814 01:27:47.376215   70428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.crt ...
	I0814 01:27:47.376241   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.crt: {Name:mk1b1c660dda01764b24d5b1261c475b0599509b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.376407   70428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.key ...
	I0814 01:27:47.376418   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/client.key: {Name:mkc9382fb756756ab1ccc7fb08898f39e02e8e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.376491   70428 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key.1f7846c7
	I0814 01:27:47.376505   70428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt.1f7846c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.199]
	I0814 01:27:47.765005   70428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt.1f7846c7 ...
	I0814 01:27:47.765034   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt.1f7846c7: {Name:mkde4843c128a76b627984e9d92038867a0767d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.765206   70428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key.1f7846c7 ...
	I0814 01:27:47.765224   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key.1f7846c7: {Name:mkae70fa16d2d72633610fbf046de16c3caa18df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.765343   70428 certs.go:381] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt.1f7846c7 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt
	I0814 01:27:47.765454   70428 certs.go:385] copying /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key.1f7846c7 -> /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key
	I0814 01:27:47.765514   70428 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.key
	I0814 01:27:47.765529   70428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.crt with IP's: []
	I0814 01:27:47.844890   70428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.crt ...
	I0814 01:27:47.844918   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.crt: {Name:mkafbd2e795ff2ff276e51f3fbbec18004a203ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.845078   70428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.key ...
	I0814 01:27:47.845092   70428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.key: {Name:mkb1f16d7af7c4c9ab3c3df240c4894c27bb6e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:27:47.845312   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:27:47.845360   70428 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:27:47.845372   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:27:47.845395   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:27:47.845431   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:27:47.845456   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:27:47.845494   70428 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:27:47.846110   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:27:47.869395   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:27:47.891899   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:27:47.914645   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:27:47.937316   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 01:27:47.959188   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:27:47.982436   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:27:48.034318   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/calico-612440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:27:48.057955   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:27:48.080121   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:27:48.101058   70428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:27:48.122867   70428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:27:48.139948   70428 ssh_runner.go:195] Run: openssl version
	I0814 01:27:48.145406   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:27:48.155736   70428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:27:48.159894   70428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:27:48.159943   70428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:27:48.165249   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:27:48.174772   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:27:48.184604   70428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:48.188741   70428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:48.188793   70428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:27:48.193884   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:27:48.203749   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:27:48.213991   70428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:27:48.217897   70428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:27:48.217953   70428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:27:48.223291   70428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
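	(The sequence above installs each CA certificate into the guest's trust store by computing its OpenSSL subject hash and creating a hash-named symlink under /etc/ssl/certs. A minimal hand-run sketch of the same idea, against a hypothetical certificate path rather than one from this run:

	    # Compute the subject hash OpenSSL uses to look up trusted CAs
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
	    # Expose the certificate under its hash name so openssl/curl can find it
	    sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"
	)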
	I0814 01:27:48.233064   70428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:27:48.236701   70428 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 01:27:48.236752   70428 kubeadm.go:392] StartCluster: {Name:calico-612440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:calico-612440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:27:48.236815   70428 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:27:48.236848   70428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:27:48.272565   70428 cri.go:89] found id: ""
	I0814 01:27:48.272650   70428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:27:48.282159   70428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:27:48.293675   70428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:27:48.303000   70428 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:27:48.303021   70428 kubeadm.go:157] found existing configuration files:
	
	I0814 01:27:48.303073   70428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:27:48.313025   70428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:27:48.313080   70428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:27:48.323467   70428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:27:48.333349   70428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:27:48.333440   70428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:27:48.342251   70428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:27:48.350472   70428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:27:48.350546   70428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:27:48.359392   70428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:27:48.369012   70428 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:27:48.369071   70428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
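	(The config check above greps each existing kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it before kubeadm runs. A rough shell equivalent of that cleanup loop; the endpoint and file names mirror the log, but treat this as an illustrative sketch, not minikube's actual code:

	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # Keep the file only if it already points at the expected endpoint
	        if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	            sudo rm -f "/etc/kubernetes/$f"
	        fi
	    done
	)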
	I0814 01:27:48.378943   70428 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:27:48.437463   70428 kubeadm.go:310] W0814 01:27:48.419444     853 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:27:48.438063   70428 kubeadm.go:310] W0814 01:27:48.420361     853 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:27:48.538849   70428 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:27:46.931954   70015 node_ready.go:53] node "kindnet-612440" has status "Ready":"False"
	I0814 01:27:48.932140   70015 node_ready.go:53] node "kindnet-612440" has status "Ready":"False"
	I0814 01:27:50.240947   70015 node_ready.go:49] node "kindnet-612440" has status "Ready":"True"
	I0814 01:27:50.240971   70015 node_ready.go:38] duration metric: took 11.813004675s for node "kindnet-612440" to be "Ready" ...
	I0814 01:27:50.240981   70015 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:27:50.250591   70015 pod_ready.go:78] waiting up to 15m0s for pod "coredns-6f6b679f8f-26d7x" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.258542   70015 pod_ready.go:92] pod "coredns-6f6b679f8f-26d7x" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.258572   70015 pod_ready.go:81] duration metric: took 2.007951013s for pod "coredns-6f6b679f8f-26d7x" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.258586   70015 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.263299   70015 pod_ready.go:92] pod "etcd-kindnet-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.263327   70015 pod_ready.go:81] duration metric: took 4.732207ms for pod "etcd-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.263344   70015 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.267708   70015 pod_ready.go:92] pod "kube-apiserver-kindnet-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.267733   70015 pod_ready.go:81] duration metric: took 4.379445ms for pod "kube-apiserver-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.267745   70015 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.272693   70015 pod_ready.go:92] pod "kube-controller-manager-kindnet-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.272712   70015 pod_ready.go:81] duration metric: took 4.958983ms for pod "kube-controller-manager-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.272721   70015 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-czg52" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.276496   70015 pod_ready.go:92] pod "kube-proxy-czg52" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.276513   70015 pod_ready.go:81] duration metric: took 3.785733ms for pod "kube-proxy-czg52" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.276525   70015 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.655333   70015 pod_ready.go:92] pod "kube-scheduler-kindnet-612440" in "kube-system" namespace has status "Ready":"True"
	I0814 01:27:52.655364   70015 pod_ready.go:81] duration metric: took 378.81777ms for pod "kube-scheduler-kindnet-612440" in "kube-system" namespace to be "Ready" ...
	I0814 01:27:52.655377   70015 pod_ready.go:38] duration metric: took 2.414364228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
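	(The pod_ready checks above poll each system-critical pod until it reports Ready. Roughly the same check can be run by hand with kubectl; the context name comes from this log, the selector is one example label from the watched set:

	    kubectl --context kindnet-612440 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=15m
	)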
	I0814 01:27:52.655398   70015 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:27:52.655457   70015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:27:52.673743   70015 api_server.go:72] duration metric: took 15.001753663s to wait for apiserver process to appear ...
	I0814 01:27:52.673767   70015 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:27:52.673788   70015 api_server.go:253] Checking apiserver healthz at https://192.168.61.73:8443/healthz ...
	I0814 01:27:52.678017   70015 api_server.go:279] https://192.168.61.73:8443/healthz returned 200:
	ok
	I0814 01:27:52.679069   70015 api_server.go:141] control plane version: v1.31.0
	I0814 01:27:52.679098   70015 api_server.go:131] duration metric: took 5.322765ms to wait for apiserver health ...
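	(The healthz probe above hits the API server directly over HTTPS and treats a 200 response with body "ok" as healthy. A hand-run equivalent against the same endpoint; -k skips certificate validation, and anonymous access to /healthz is allowed on a default minikube cluster:

	    curl -k https://192.168.61.73:8443/healthz
	    # expected output: ok
	)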
	I0814 01:27:52.679109   70015 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:27:52.858762   70015 system_pods.go:59] 8 kube-system pods found
	I0814 01:27:52.858812   70015 system_pods.go:61] "coredns-6f6b679f8f-26d7x" [9b2ae407-b4b0-4183-871a-3f3ca3ea346e] Running
	I0814 01:27:52.858822   70015 system_pods.go:61] "etcd-kindnet-612440" [98bbf8cc-6487-4f36-8f98-84ae656051bd] Running
	I0814 01:27:52.858828   70015 system_pods.go:61] "kindnet-s95bz" [a197bc12-87e4-4b20-9369-a5d7690405ad] Running
	I0814 01:27:52.858838   70015 system_pods.go:61] "kube-apiserver-kindnet-612440" [2b3dabf5-4c03-4d35-a024-61bc459c3c70] Running
	I0814 01:27:52.858845   70015 system_pods.go:61] "kube-controller-manager-kindnet-612440" [62e10ac3-8b38-422d-8243-efc2d92b2529] Running
	I0814 01:27:52.858851   70015 system_pods.go:61] "kube-proxy-czg52" [6943aee4-9960-4358-8a9f-45d170596194] Running
	I0814 01:27:52.858856   70015 system_pods.go:61] "kube-scheduler-kindnet-612440" [964b02c0-dc8b-4dfa-ba59-159bfa75f3f5] Running
	I0814 01:27:52.858860   70015 system_pods.go:61] "storage-provisioner" [e60a21fc-0580-42b7-9e3d-400d62c991ba] Running
	I0814 01:27:52.858865   70015 system_pods.go:74] duration metric: took 179.750691ms to wait for pod list to return data ...
	I0814 01:27:52.858873   70015 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:27:53.056075   70015 default_sa.go:45] found service account: "default"
	I0814 01:27:53.056102   70015 default_sa.go:55] duration metric: took 197.222281ms for default service account to be created ...
	I0814 01:27:53.056112   70015 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:27:53.259181   70015 system_pods.go:86] 8 kube-system pods found
	I0814 01:27:53.259210   70015 system_pods.go:89] "coredns-6f6b679f8f-26d7x" [9b2ae407-b4b0-4183-871a-3f3ca3ea346e] Running
	I0814 01:27:53.259216   70015 system_pods.go:89] "etcd-kindnet-612440" [98bbf8cc-6487-4f36-8f98-84ae656051bd] Running
	I0814 01:27:53.259221   70015 system_pods.go:89] "kindnet-s95bz" [a197bc12-87e4-4b20-9369-a5d7690405ad] Running
	I0814 01:27:53.259224   70015 system_pods.go:89] "kube-apiserver-kindnet-612440" [2b3dabf5-4c03-4d35-a024-61bc459c3c70] Running
	I0814 01:27:53.259229   70015 system_pods.go:89] "kube-controller-manager-kindnet-612440" [62e10ac3-8b38-422d-8243-efc2d92b2529] Running
	I0814 01:27:53.259233   70015 system_pods.go:89] "kube-proxy-czg52" [6943aee4-9960-4358-8a9f-45d170596194] Running
	I0814 01:27:53.259238   70015 system_pods.go:89] "kube-scheduler-kindnet-612440" [964b02c0-dc8b-4dfa-ba59-159bfa75f3f5] Running
	I0814 01:27:53.259243   70015 system_pods.go:89] "storage-provisioner" [e60a21fc-0580-42b7-9e3d-400d62c991ba] Running
	I0814 01:27:53.259251   70015 system_pods.go:126] duration metric: took 203.132847ms to wait for k8s-apps to be running ...
	I0814 01:27:53.259263   70015 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:27:53.259312   70015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:27:53.275788   70015 system_svc.go:56] duration metric: took 16.515492ms WaitForService to wait for kubelet
	I0814 01:27:53.275819   70015 kubeadm.go:582] duration metric: took 15.603833073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:27:53.275840   70015 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:27:53.455445   70015 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:27:53.455470   70015 node_conditions.go:123] node cpu capacity is 2
	I0814 01:27:53.455480   70015 node_conditions.go:105] duration metric: took 179.634741ms to run NodePressure ...
	I0814 01:27:53.455493   70015 start.go:241] waiting for startup goroutines ...
	I0814 01:27:53.455506   70015 start.go:246] waiting for cluster config update ...
	I0814 01:27:53.455522   70015 start.go:255] writing updated cluster config ...
	I0814 01:27:53.455781   70015 ssh_runner.go:195] Run: rm -f paused
	I0814 01:27:53.505250   70015 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:27:53.507220   70015 out.go:177] * Done! kubectl is now configured to use "kindnet-612440" cluster and "default" namespace by default
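	(Once minikube prints "Done!", the kubeconfig current-context has been switched to the new cluster. A quick way to confirm which cluster kubectl will talk to:

	    kubectl config current-context        # should print kindnet-612440
	    kubectl --context kindnet-612440 get nodes
	)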
	I0814 01:27:58.565817   70428 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:27:58.565913   70428 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:27:58.566000   70428 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:27:58.566114   70428 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:27:58.566191   70428 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:27:58.566244   70428 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:27:58.567597   70428 out.go:204]   - Generating certificates and keys ...
	I0814 01:27:58.567676   70428 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:27:58.567774   70428 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:27:58.567870   70428 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 01:27:58.567922   70428 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 01:27:58.567975   70428 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 01:27:58.568017   70428 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 01:27:58.568062   70428 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 01:27:58.568167   70428 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-612440 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
	I0814 01:27:58.568220   70428 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 01:27:58.568335   70428 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-612440 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
	I0814 01:27:58.568437   70428 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 01:27:58.568545   70428 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 01:27:58.568617   70428 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 01:27:58.568681   70428 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:27:58.568724   70428 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:27:58.568777   70428 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:27:58.568821   70428 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:27:58.568873   70428 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:27:58.568918   70428 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:27:58.569000   70428 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:27:58.569084   70428 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:27:58.571089   70428 out.go:204]   - Booting up control plane ...
	I0814 01:27:58.571192   70428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:27:58.571313   70428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:27:58.571418   70428 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:27:58.571543   70428 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:27:58.571651   70428 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:27:58.571711   70428 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:27:58.571834   70428 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:27:58.571943   70428 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:27:58.572009   70428 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002101292s
	I0814 01:27:58.572117   70428 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:27:58.572172   70428 kubeadm.go:310] [api-check] The API server is healthy after 4.501939952s
	I0814 01:27:58.572301   70428 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:27:58.572423   70428 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:27:58.572490   70428 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:27:58.572688   70428 kubeadm.go:310] [mark-control-plane] Marking the node calico-612440 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:27:58.572751   70428 kubeadm.go:310] [bootstrap-token] Using token: i3skvt.ztj84u9eky45ylzy
	I0814 01:27:58.574289   70428 out.go:204]   - Configuring RBAC rules ...
	I0814 01:27:58.574393   70428 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:27:58.574504   70428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:27:58.574674   70428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:27:58.574829   70428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:27:58.574964   70428 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:27:58.575095   70428 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:27:58.575282   70428 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:27:58.575341   70428 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:27:58.575390   70428 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:27:58.575396   70428 kubeadm.go:310] 
	I0814 01:27:58.575450   70428 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:27:58.575456   70428 kubeadm.go:310] 
	I0814 01:27:58.575531   70428 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:27:58.575537   70428 kubeadm.go:310] 
	I0814 01:27:58.575558   70428 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:27:58.575607   70428 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:27:58.575649   70428 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:27:58.575658   70428 kubeadm.go:310] 
	I0814 01:27:58.575706   70428 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:27:58.575712   70428 kubeadm.go:310] 
	I0814 01:27:58.575777   70428 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:27:58.575794   70428 kubeadm.go:310] 
	I0814 01:27:58.575853   70428 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:27:58.575916   70428 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:27:58.575975   70428 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:27:58.575982   70428 kubeadm.go:310] 
	I0814 01:27:58.576049   70428 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:27:58.576119   70428 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:27:58.576126   70428 kubeadm.go:310] 
	I0814 01:27:58.576214   70428 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i3skvt.ztj84u9eky45ylzy \
	I0814 01:27:58.576340   70428 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:27:58.576369   70428 kubeadm.go:310] 	--control-plane 
	I0814 01:27:58.576384   70428 kubeadm.go:310] 
	I0814 01:27:58.576495   70428 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:27:58.576504   70428 kubeadm.go:310] 
	I0814 01:27:58.576593   70428 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i3skvt.ztj84u9eky45ylzy \
	I0814 01:27:58.576714   70428 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
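	(The bootstrap token in the join command above expires after 24h by default, so the printed line is not reusable indefinitely. If another node needs to join later, a fresh token and join command can be generated on the control plane with a standard kubeadm invocation; this is not something the test run itself executes:

	    sudo kubeadm token create --print-join-command
	)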
	I0814 01:27:58.576725   70428 cni.go:84] Creating CNI manager for "calico"
	I0814 01:27:58.578136   70428 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0814 01:27:58.579624   70428 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 01:27:58.579639   70428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0814 01:27:58.599282   70428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 01:27:59.880740   70428 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.281421621s)
	I0814 01:27:59.880810   70428 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:27:59.880967   70428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:27:59.880984   70428 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-612440 minikube.k8s.io/updated_at=2024_08_14T01_27_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=calico-612440 minikube.k8s.io/primary=true
	I0814 01:27:59.915861   70428 ops.go:34] apiserver oom_adj: -16
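	(After the Calico manifest is applied and the node is labelled, the CNI pods should come up in kube-system before workloads can schedule. A quick check against this cluster; the label selector is the one Calico's stock manifest uses and may need adjusting for a customised manifest:

	    kubectl --context calico-612440 -n kube-system get pods -l k8s-app=calico-node
	)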
	
	
	==> CRI-O <==
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.297183399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598882297159228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a89880ce-5b8d-4be1-b0e9-7d9d027ff5cf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.297841032Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cdc876d-27a5-4c1f-94e5-7b062285acbe name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.297907684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cdc876d-27a5-4c1f-94e5-7b062285acbe name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.298117166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cdc876d-27a5-4c1f-94e5-7b062285acbe name=/runtime.v1.RuntimeService/ListContainers
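	(The CRI-O debug entries above are the runtime's responses to Version, ImageFsInfo, and ListContainers CRI calls. The same information can be pulled interactively with crictl on the node, e.g. after "minikube ssh"; imagefsinfo may not be present in older crictl builds:

	    sudo crictl version
	    sudo crictl ps -a          # all containers, matching ListContainers
	    sudo crictl imagefsinfo    # matches the ImageFsInfo response
	)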
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.365867481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=379fb4bf-efbb-418b-8d1d-b19fdc300ab3 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.365993026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=379fb4bf-efbb-418b-8d1d-b19fdc300ab3 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.367226396Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27e916a3-36c5-4cb2-9830-cfe10a20325c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.367884523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598882367848409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27e916a3-36c5-4cb2-9830-cfe10a20325c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.368566550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da48e321-5354-465e-9498-ee073fd95826 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.368646341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da48e321-5354-465e-9498-ee073fd95826 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.368982227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da48e321-5354-465e-9498-ee073fd95826 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.410855989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dcda577-8450-47bb-973a-8e9f0e2f4714 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.410986256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dcda577-8450-47bb-973a-8e9f0e2f4714 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.413150355Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16e26610-e244-42b0-944e-8777a9d70da6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.413743403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598882413707136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16e26610-e244-42b0-944e-8777a9d70da6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.414450703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1844d292-59ca-445c-a3d4-1e05a35dc441 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.414540042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1844d292-59ca-445c-a3d4-1e05a35dc441 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.414888511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1844d292-59ca-445c-a3d4-1e05a35dc441 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.455361932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d4e9b35-9e45-4c6d-aa0b-ec009fb3b2ac name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.455606613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d4e9b35-9e45-4c6d-aa0b-ec009fb3b2ac name=/runtime.v1.RuntimeService/Version
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.457485736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cffcd00-9a8d-43ad-852b-57326a8504b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.458241579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598882458204342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cffcd00-9a8d-43ad-852b-57326a8504b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.459060547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a202a79-5b01-4e65-af93-afcf962f1cf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.459148023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a202a79-5b01-4e65-af93-afcf962f1cf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:28:02 default-k8s-diff-port-585256 crio[720]: time="2024-08-14 01:28:02.459468242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2,PodSandboxId:6f98fff5404794ccef4bb9d032df8093f55924505cda14bdcde5a3ba7cda3970,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723597853398580338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1636777b-2347-4c48-b72a-3b5445c4862a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557,PodSandboxId:9eca25d767f1a81f28b14158d7c80ca0ffb1397c3f86f79708b9ef2b6afda147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852912193042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hngz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 213f9a45-596b-47b3-9c37-ceae021433ea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6,PodSandboxId:01056aaf40aa4e053f6a713b8800657d9b8d39f399c57d6b1eb2fc89aef05542,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597852839646809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jmqk7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 397fb54b-40cd-4c4e-9503-c077f814c6e5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f,PodSandboxId:00369bc4aed926bb963ceeb61eb396f9f6eb6d5b9329f30c4310ee1f9d21a2bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1723597852320287203,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rg8h9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2601104-a6f5-4065-87d5-c027d583f647,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624,PodSandboxId:bc1dd8cbb18bc40b7490227aee0040905b7330da761fb42f4035d068c9e0edbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597841373142601
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f2be92dbc40486c02357bb4abdde53,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe,PodSandboxId:71ce6596516d365b5372df76128b02d8a6051a0d0ce23a4367a3e8507ecf20d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172359784130
4029196,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c65692368a95f1446ffe5a25cc5946d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df,PodSandboxId:05b6d78a4af0439040fe1dfceffa45c4fec37ab4661259746bb22dbd4477fa8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172359
7841307764691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a27e79549c7620840739e6e02d96eba0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce,PodSandboxId:88cb42849b1235a2a66a92861478f078a21a29de919930305958763f81f330e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597841236277469,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7,PodSandboxId:8eb9ce14fa9cd506a3a371f7475fa31b94ca888cfa80f7d9c00effdd8aac0287,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597560719516832,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-585256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9b8458885f7bf294298151b292cf053,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a202a79-5b01-4e65-af93-afcf962f1cf0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	178ad8a6bac13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   6f98fff540479       storage-provisioner
	4a30f6f8799ca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   9eca25d767f1a       coredns-6f6b679f8f-hngz9
	39c53a765019e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   01056aaf40aa4       coredns-6f6b679f8f-jmqk7
	85fda842f55cd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   17 minutes ago      Running             kube-proxy                0                   00369bc4aed92       kube-proxy-rg8h9
	2030360e48549       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   17 minutes ago      Running             kube-scheduler            2                   bc1dd8cbb18bc       kube-scheduler-default-k8s-diff-port-585256
	3c2ba2d805c84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   05b6d78a4af04       etcd-default-k8s-diff-port-585256
	1e64d705f36b0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   17 minutes ago      Running             kube-controller-manager   2                   71ce6596516d3       kube-controller-manager-default-k8s-diff-port-585256
	4c4a3040cf2e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   17 minutes ago      Running             kube-apiserver            2                   88cb42849b123       kube-apiserver-default-k8s-diff-port-585256
	a9ede10be40aa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   22 minutes ago      Exited              kube-apiserver            1                   8eb9ce14fa9cd       kube-apiserver-default-k8s-diff-port-585256
	
	
	==> coredns [39c53a765019e07c801f353b5bd3181a2b9adb29b71bbf5ff1e384dc1f3b9af6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [4a30f6f8799cac2a8f016c3eaf2abaf8462dbc8c55b19ea96d20ff345cd84557] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-585256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-585256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=default-k8s-diff-port-585256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 01:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-585256
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:26:16 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:26:16 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:26:16 +0000   Wed, 14 Aug 2024 01:10:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:26:16 +0000   Wed, 14 Aug 2024 01:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    default-k8s-diff-port-585256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 666676425019446a941ebb971b72dcb3
	  System UUID:                66667642-5019-446a-941e-bb971b72dcb3
	  Boot ID:                    ed146dfb-8b26-4148-877f-d40b1fba7453
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hngz9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-6f6b679f8f-jmqk7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-585256                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-585256             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-585256    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-rg8h9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-585256             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-lzfpz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-585256 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-585256 event: Registered Node default-k8s-diff-port-585256 in Controller
	
	
	==> dmesg <==
	[  +0.050512] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.848194] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.410528] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.245917] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.055560] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065016] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.210631] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.151481] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.339617] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.292368] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.063639] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.561710] systemd-fstab-generator[923]: Ignoring "noauto" option for root device
	[Aug14 01:06] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.688674] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 01:10] kauditd_printk_skb: 4 callbacks suppressed
	[  +1.201538] systemd-fstab-generator[2577]: Ignoring "noauto" option for root device
	[  +4.439054] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.617681] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +5.313063] systemd-fstab-generator[3015]: Ignoring "noauto" option for root device
	[  +0.090024] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.732176] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [3c2ba2d805c8434f2a11f3cd7612d8b5e3857ef1450b928cad13153036ba31df] <==
	{"level":"warn","ts":"2024-08-14T01:26:32.832367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"560.957514ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-14T01:26:32.832513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"621.880636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:26:32.832559Z","caller":"traceutil/trace.go:171","msg":"trace[1258867750] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1251; }","duration":"621.92535ms","start":"2024-08-14T01:26:32.210624Z","end":"2024-08-14T01:26:32.832550Z","steps":["trace[1258867750] 'agreement among raft nodes before linearized reading'  (duration: 621.860572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:26:32.832605Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:26:32.210577Z","time spent":"622.021338ms","remote":"127.0.0.1:44334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-14T01:26:32.832623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.091985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:26:32.832684Z","caller":"traceutil/trace.go:171","msg":"trace[1239524567] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1251; }","duration":"149.157692ms","start":"2024-08-14T01:26:32.683521Z","end":"2024-08-14T01:26:32.832678Z","steps":["trace[1239524567] 'agreement among raft nodes before linearized reading'  (duration: 149.014953ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:26:32.832511Z","caller":"traceutil/trace.go:171","msg":"trace[228216333] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1251; }","duration":"561.115631ms","start":"2024-08-14T01:26:32.271384Z","end":"2024-08-14T01:26:32.832499Z","steps":["trace[228216333] 'agreement among raft nodes before linearized reading'  (duration: 560.9344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:26:32.832370Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:26:32.133127Z","time spent":"698.282497ms","remote":"127.0.0.1:44520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1250 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-14T01:26:59.447065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.481918ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:26:59.447418Z","caller":"traceutil/trace.go:171","msg":"trace[397614214] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1273; }","duration":"175.85951ms","start":"2024-08-14T01:26:59.271532Z","end":"2024-08-14T01:26:59.447392Z","steps":["trace[397614214] 'range keys from in-memory index tree'  (duration: 175.469234ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:27:03.511113Z","caller":"traceutil/trace.go:171","msg":"trace[1858942962] linearizableReadLoop","detail":"{readStateIndex:1485; appliedIndex:1484; }","duration":"302.048881ms","start":"2024-08-14T01:27:03.209048Z","end":"2024-08-14T01:27:03.511096Z","steps":["trace[1858942962] 'read index received'  (duration: 301.85125ms)","trace[1858942962] 'applied index is now lower than readState.Index'  (duration: 197.064µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:27:03.511408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.041331ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:27:03.512214Z","caller":"traceutil/trace.go:171","msg":"trace[1560480673] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1275; }","duration":"240.885835ms","start":"2024-08-14T01:27:03.271312Z","end":"2024-08-14T01:27:03.512198Z","steps":["trace[1560480673] 'agreement among raft nodes before linearized reading'  (duration: 240.018375ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:27:03.511432Z","caller":"traceutil/trace.go:171","msg":"trace[1389986745] transaction","detail":"{read_only:false; response_revision:1275; number_of_response:1; }","duration":"429.828591ms","start":"2024-08-14T01:27:03.081585Z","end":"2024-08-14T01:27:03.511414Z","steps":["trace[1389986745] 'process raft request'  (duration: 429.35149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:27:03.511494Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.436781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:27:03.512876Z","caller":"traceutil/trace.go:171","msg":"trace[1871256183] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1275; }","duration":"303.810086ms","start":"2024-08-14T01:27:03.209042Z","end":"2024-08-14T01:27:03.512852Z","steps":["trace[1871256183] 'agreement among raft nodes before linearized reading'  (duration: 302.418151ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T01:27:03.512972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:27:03.208997Z","time spent":"303.955459ms","remote":"127.0.0.1:44334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-14T01:27:03.512880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T01:27:03.081568Z","time spent":"431.169234ms","remote":"127.0.0.1:44520","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1274 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-14T01:27:23.828770Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.077646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:27:23.829036Z","caller":"traceutil/trace.go:171","msg":"trace[333089003] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1293; }","duration":"148.357992ms","start":"2024-08-14T01:27:23.680662Z","end":"2024-08-14T01:27:23.829020Z","steps":["trace[333089003] 'range keys from in-memory index tree'  (duration: 147.954223ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:27:24.399386Z","caller":"traceutil/trace.go:171","msg":"trace[1401489177] linearizableReadLoop","detail":"{readStateIndex:1508; appliedIndex:1507; }","duration":"128.378884ms","start":"2024-08-14T01:27:24.270989Z","end":"2024-08-14T01:27:24.399368Z","steps":["trace[1401489177] 'read index received'  (duration: 128.158523ms)","trace[1401489177] 'applied index is now lower than readState.Index'  (duration: 219.678µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T01:27:24.399561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.545013ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:27:24.399614Z","caller":"traceutil/trace.go:171","msg":"trace[1462701059] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1294; }","duration":"128.619745ms","start":"2024-08-14T01:27:24.270985Z","end":"2024-08-14T01:27:24.399605Z","steps":["trace[1462701059] 'agreement among raft nodes before linearized reading'  (duration: 128.532156ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:27:24.399669Z","caller":"traceutil/trace.go:171","msg":"trace[71099139] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"168.160172ms","start":"2024-08-14T01:27:24.231493Z","end":"2024-08-14T01:27:24.399654Z","steps":["trace[71099139] 'process raft request'  (duration: 167.731595ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:27:50.073717Z","caller":"traceutil/trace.go:171","msg":"trace[1806774684] transaction","detail":"{read_only:false; response_revision:1315; number_of_response:1; }","duration":"292.982647ms","start":"2024-08-14T01:27:49.780677Z","end":"2024-08-14T01:27:50.073660Z","steps":["trace[1806774684] 'process raft request'  (duration: 292.829664ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:28:02 up 22 min,  0 users,  load average: 0.01, 0.08, 0.12
	Linux default-k8s-diff-port-585256 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4c4a3040cf2e5a03a5ceee9ed4044568f56fc5ef6ef1c69f9b963f837d55c4ce] <==
	I0814 01:23:44.932166       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:23:44.932239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:25:43.929990       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:43.930117       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:25:44.932395       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:44.932460       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:25:44.932399       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:25:44.932557       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:25:44.933689       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:25:44.933733       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:26:44.934288       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:44.934360       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 01:26:44.934427       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:44.934481       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:26:44.935612       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:26:44.935662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a9ede10be40aa734712d3099693d401dbbd0b4f44fb5192ae012b554b9747ad7] <==
	W0814 01:10:36.536513       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.546034       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.568676       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.570070       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.577949       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.591651       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.598058       1 logging.go:55] [core] [Channel #16 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.613938       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.619402       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.639168       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.648659       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.678383       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.696183       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.709853       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.737735       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.784554       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.805599       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.875844       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.879424       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.879630       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.889082       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.955568       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.959197       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.968726       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:10:36.972236       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1e64d705f36b0067db76dc0ad093697a628d84f8b955847ef47867dbf1a7f9fe] <==
	E0814 01:22:50.908326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:22:51.468133       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:23:20.914546       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:21.475298       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:23:50.921045       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:51.485256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:20.927975       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:21.493978       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:50.935121       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:51.501977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:20.941272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:21.509450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:50.949480       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:51.517206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:26:16.249490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-585256"
	E0814 01:26:20.956469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:26:21.525669       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:26:50.963448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:26:51.535066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:27:04.610977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.420871ms"
	I0814 01:27:19.607449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="109.772µs"
	E0814 01:27:20.971338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:27:21.545652       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:27:50.977517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:27:51.553873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [85fda842f55cd00d8fe6aaea85760248f75acc62e7346fd6892aa6d01236fc0f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:10:52.608122       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:10:52.622552       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E0814 01:10:52.622627       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:10:52.898766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:10:52.898859       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:10:52.898892       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:10:52.901404       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:10:52.901750       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:10:52.901866       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:10:52.918305       1 config.go:197] "Starting service config controller"
	I0814 01:10:52.918348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:10:52.918368       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:10:52.918371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:10:52.921429       1 config.go:326] "Starting node config controller"
	I0814 01:10:52.921447       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:10:53.020143       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 01:10:53.020234       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:10:53.023984       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2030360e485495175fa2be21c7c093c7d7310dfb73cf98c599fe4f9695485624] <==
	W0814 01:10:43.948860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:10:43.949160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 01:10:43.949352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:10:43.949494       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:43.949526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:10:43.949561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.843562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 01:10:44.843668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.848663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:10:44.848755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:44.955169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 01:10:44.955265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.030248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:10:45.030312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.046128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:10:45.046190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.097166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 01:10:45.097506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:10:45.199362       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 01:10:45.199425       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 01:10:45.220123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 01:10:45.220371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0814 01:10:48.037753       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:26:51 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:26:51.606114    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:26:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:26:56.864886    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598816864360976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:26:56.865505    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598816864360976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:04 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:04.592070    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:27:06 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:06.868121    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598826867465148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:06 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:06.868488    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598826867465148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:16 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:16.870215    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598836869895388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:16 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:16.870261    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598836869895388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:19 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:19.592465    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:27:26 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:26.872334    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598846871898657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:26 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:26.872396    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598846871898657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:32 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:32.591247    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:27:36 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:36.874730    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598856874182207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:36 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:36.875068    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598856874182207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:46.592504    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:46.609747    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:46.876928    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598866876499754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:46 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:46.877068    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598866876499754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:56.878580    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598876878181819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:56 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:56.878922    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598876878181819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:27:58 default-k8s-diff-port-585256 kubelet[2905]: E0814 01:27:58.592186    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lzfpz" podUID="2dd31ad2-c384-4edd-8d5c-561bc2fa72e4"
	
	
	==> storage-provisioner [178ad8a6bac1357bb802e3d04d4a245d48d7d17ada831702e7c8b8576d501dd2] <==
	I0814 01:10:53.502743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:10:53.518304       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:10:53.518391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:10:53.549838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:10:53.550404       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d!
	I0814 01:10:53.551716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"887a86e8-3ce8-4d79-9ca4-abb6cd830367", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d became leader
	I0814 01:10:53.651495       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-585256_b8154309-70c8-444e-8be9-df686861cf5d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lzfpz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz: exit status 1 (75.32913ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lzfpz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-585256 describe pod metrics-server-6867b74b74-lzfpz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (478.36s)
E0814 01:29:50.830999   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt: no such file or directory" logger="UnhandledError"
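(For reference: the describe step above fails because the pod named in the listing no longer exists by the time it runs. An illustrative way to inspect whatever metrics-server replica currently exists is to select by label and deployment instead of by pod name; these commands are a sketch only, were not part of the recorded run, and assume the addon's standard k8s-app=metrics-server label.)

	kubectl --context default-k8s-diff-port-585256 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context default-k8s-diff-port-585256 -n kube-system describe deploy metrics-server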

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.69s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-901410 -n embed-certs-901410
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 01:26:26.288427244 +0000 UTC m=+5976.451924452
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-901410 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-901410 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.747µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-901410 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
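(The assertion above inspects the dashboard-metrics-scraper deployment and expects its image list to contain registry.k8s.io/echoserver:1.4; since the describe call timed out and left the "Addon deployment info" empty, an illustrative way to read that field directly is shown below. This command is a sketch against the embed-certs-901410 context and was not executed as part of this run.)

	kubectl --context embed-certs-901410 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'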
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-901410 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-901410 logs -n 25: (1.436535658s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-137211             | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-137211                                   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:25 UTC | 14 Aug 24 01:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-137211                  | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC | 14 Aug 24 01:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-137211 --memory=2200 --alsologtostderr   | newest-cni-137211            | jenkins | v1.33.1 | 14 Aug 24 01:26 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:26:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:26:05.160327   68995 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:26:05.160592   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:05.160601   68995 out.go:304] Setting ErrFile to fd 2...
	I0814 01:26:05.160605   68995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:26:05.160799   68995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:26:05.161298   68995 out.go:298] Setting JSON to false
	I0814 01:26:05.162250   68995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7711,"bootTime":1723591054,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:26:05.162304   68995 start.go:139] virtualization: kvm guest
	I0814 01:26:05.164653   68995 out.go:177] * [newest-cni-137211] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:26:05.165894   68995 notify.go:220] Checking for updates...
	I0814 01:26:05.165905   68995 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:26:05.167129   68995 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:26:05.168364   68995 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:26:05.169508   68995 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:26:05.170660   68995 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:26:05.171790   68995 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:26:05.173219   68995 config.go:182] Loaded profile config "newest-cni-137211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:05.173620   68995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:26:05.173658   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:26:05.189245   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0814 01:26:05.189724   68995 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:26:05.190353   68995 main.go:141] libmachine: Using API Version  1
	I0814 01:26:05.190386   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:26:05.190712   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:26:05.190888   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:05.191114   68995 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:26:05.191393   68995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:26:05.191440   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:26:05.205993   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46711
	I0814 01:26:05.206481   68995 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:26:05.206924   68995 main.go:141] libmachine: Using API Version  1
	I0814 01:26:05.206945   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:26:05.207247   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:26:05.207433   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:05.243595   68995 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:26:05.244875   68995 start.go:297] selected driver: kvm2
	I0814 01:26:05.244891   68995 start.go:901] validating driver "kvm2" against &{Name:newest-cni-137211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-137211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:26:05.245015   68995 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:26:05.245730   68995 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:26:05.245795   68995 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:26:05.260332   68995 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:26:05.260691   68995 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 01:26:05.260753   68995 cni.go:84] Creating CNI manager for ""
	I0814 01:26:05.260766   68995 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:26:05.260803   68995 start.go:340] cluster config:
	{Name:newest-cni-137211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-137211 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:26:05.260914   68995 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:26:05.262661   68995 out.go:177] * Starting "newest-cni-137211" primary control-plane node in "newest-cni-137211" cluster
	I0814 01:26:05.263855   68995 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:26:05.263887   68995 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:26:05.263905   68995 cache.go:56] Caching tarball of preloaded images
	I0814 01:26:05.263995   68995 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:26:05.264008   68995 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:26:05.264099   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/newest-cni-137211/config.json ...
	I0814 01:26:05.264270   68995 start.go:360] acquireMachinesLock for newest-cni-137211: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:26:05.264307   68995 start.go:364] duration metric: took 21.099µs to acquireMachinesLock for "newest-cni-137211"
	I0814 01:26:05.264320   68995 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:26:05.264332   68995 fix.go:54] fixHost starting: 
	I0814 01:26:05.264591   68995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:26:05.264624   68995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:26:05.278982   68995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0814 01:26:05.279379   68995 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:26:05.279873   68995 main.go:141] libmachine: Using API Version  1
	I0814 01:26:05.279898   68995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:26:05.280272   68995 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:26:05.280485   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:05.280652   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetState
	I0814 01:26:05.282382   68995 fix.go:112] recreateIfNeeded on newest-cni-137211: state=Stopped err=<nil>
	I0814 01:26:05.282421   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	W0814 01:26:05.282593   68995 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:26:05.285246   68995 out.go:177] * Restarting existing kvm2 VM for "newest-cni-137211" ...
	I0814 01:26:05.286351   68995 main.go:141] libmachine: (newest-cni-137211) Calling .Start
	I0814 01:26:05.286517   68995 main.go:141] libmachine: (newest-cni-137211) Ensuring networks are active...
	I0814 01:26:05.287222   68995 main.go:141] libmachine: (newest-cni-137211) Ensuring network default is active
	I0814 01:26:05.287584   68995 main.go:141] libmachine: (newest-cni-137211) Ensuring network mk-newest-cni-137211 is active
	I0814 01:26:05.287941   68995 main.go:141] libmachine: (newest-cni-137211) Getting domain xml...
	I0814 01:26:05.288667   68995 main.go:141] libmachine: (newest-cni-137211) Creating domain...
	I0814 01:26:06.515629   68995 main.go:141] libmachine: (newest-cni-137211) Waiting to get IP...
	I0814 01:26:06.516430   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:06.516852   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:06.516966   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:06.516856   69030 retry.go:31] will retry after 250.184552ms: waiting for machine to come up
	I0814 01:26:06.768375   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:06.768894   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:06.768920   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:06.768827   69030 retry.go:31] will retry after 382.399692ms: waiting for machine to come up
	I0814 01:26:07.152313   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:07.152803   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:07.152829   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:07.152748   69030 retry.go:31] will retry after 322.036886ms: waiting for machine to come up
	I0814 01:26:07.476414   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:07.476927   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:07.476952   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:07.476900   69030 retry.go:31] will retry after 442.416068ms: waiting for machine to come up
	I0814 01:26:07.921298   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:07.921805   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:07.921870   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:07.921780   69030 retry.go:31] will retry after 526.809428ms: waiting for machine to come up
	I0814 01:26:08.450596   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:08.451063   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:08.451092   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:08.451018   69030 retry.go:31] will retry after 685.434469ms: waiting for machine to come up
	I0814 01:26:09.137731   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:09.138202   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:09.138232   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:09.138155   69030 retry.go:31] will retry after 962.37891ms: waiting for machine to come up
	I0814 01:26:10.102172   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:10.102525   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:10.102549   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:10.102487   69030 retry.go:31] will retry after 941.939921ms: waiting for machine to come up
	I0814 01:26:11.046302   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:11.047007   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:11.047039   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:11.046956   69030 retry.go:31] will retry after 1.395132971s: waiting for machine to come up
	I0814 01:26:12.443479   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:12.444007   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:12.444040   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:12.443958   69030 retry.go:31] will retry after 1.761005507s: waiting for machine to come up
	I0814 01:26:14.206803   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:14.207247   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:14.207270   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:14.207199   69030 retry.go:31] will retry after 2.513047333s: waiting for machine to come up
	I0814 01:26:16.722630   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:16.723281   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:16.723306   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:16.723178   69030 retry.go:31] will retry after 3.40733215s: waiting for machine to come up
	I0814 01:26:20.133994   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:20.134503   68995 main.go:141] libmachine: (newest-cni-137211) DBG | unable to find current IP address of domain newest-cni-137211 in network mk-newest-cni-137211
	I0814 01:26:20.134534   68995 main.go:141] libmachine: (newest-cni-137211) DBG | I0814 01:26:20.134456   69030 retry.go:31] will retry after 3.170359634s: waiting for machine to come up
	I0814 01:26:23.307031   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.307460   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has current primary IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.307494   68995 main.go:141] libmachine: (newest-cni-137211) Found IP for machine: 192.168.61.50
	I0814 01:26:23.307510   68995 main.go:141] libmachine: (newest-cni-137211) Reserving static IP address...
	I0814 01:26:23.307943   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "newest-cni-137211", mac: "52:54:00:15:b5:2a", ip: "192.168.61.50"} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.307980   68995 main.go:141] libmachine: (newest-cni-137211) DBG | skip adding static IP to network mk-newest-cni-137211 - found existing host DHCP lease matching {name: "newest-cni-137211", mac: "52:54:00:15:b5:2a", ip: "192.168.61.50"}
	I0814 01:26:23.307989   68995 main.go:141] libmachine: (newest-cni-137211) Reserved static IP address: 192.168.61.50
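The preceding block is libmachine's wait-for-IP loop: each failed DHCP-lease lookup for the domain's MAC address schedules another attempt after a progressively longer, jittered delay until the lease appears (roughly 17 seconds in this run). The Go sketch below is a minimal illustration of that retry-with-backoff pattern under assumed names (lookupIP, waitForIP) and an assumed delay schedule; it is not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain's MAC address; it fails until a lease exists (here, on attempt 5).
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.50", nil
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off before the next attempt
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}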
	I0814 01:26:23.307999   68995 main.go:141] libmachine: (newest-cni-137211) Waiting for SSH to be available...
	I0814 01:26:23.308013   68995 main.go:141] libmachine: (newest-cni-137211) DBG | Getting to WaitForSSH function...
	I0814 01:26:23.310470   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.310823   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.310846   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.311009   68995 main.go:141] libmachine: (newest-cni-137211) DBG | Using SSH client type: external
	I0814 01:26:23.311032   68995 main.go:141] libmachine: (newest-cni-137211) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa (-rw-------)
	I0814 01:26:23.311061   68995 main.go:141] libmachine: (newest-cni-137211) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:26:23.311081   68995 main.go:141] libmachine: (newest-cni-137211) DBG | About to run SSH command:
	I0814 01:26:23.311109   68995 main.go:141] libmachine: (newest-cni-137211) DBG | exit 0
	I0814 01:26:23.437874   68995 main.go:141] libmachine: (newest-cni-137211) DBG | SSH cmd err, output: <nil>: 
	I0814 01:26:23.438237   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetConfigRaw
	I0814 01:26:23.438946   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetIP
	I0814 01:26:23.441469   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.441817   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.441843   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.442054   68995 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/newest-cni-137211/config.json ...
	I0814 01:26:23.442251   68995 machine.go:94] provisionDockerMachine start ...
	I0814 01:26:23.442269   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:23.442486   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:23.444594   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.444865   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.444886   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.445023   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:23.445197   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.445377   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.445525   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:23.445697   68995 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:23.445917   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0814 01:26:23.445929   68995 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:26:23.553851   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:26:23.553878   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetMachineName
	I0814 01:26:23.554120   68995 buildroot.go:166] provisioning hostname "newest-cni-137211"
	I0814 01:26:23.554147   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetMachineName
	I0814 01:26:23.554377   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:23.556827   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.557204   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.557243   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.557413   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:23.557623   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.557783   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.557923   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:23.558117   68995 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:23.558328   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0814 01:26:23.558354   68995 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-137211 && echo "newest-cni-137211" | sudo tee /etc/hostname
	I0814 01:26:23.684685   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-137211
	
	I0814 01:26:23.684711   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:23.687443   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.687791   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.687829   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.687978   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:23.688180   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.688338   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:23.688480   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:23.688627   68995 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:23.688795   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0814 01:26:23.688811   68995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-137211' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-137211/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-137211' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:26:23.806626   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:26:23.806651   68995 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:26:23.806689   68995 buildroot.go:174] setting up certificates
	I0814 01:26:23.806699   68995 provision.go:84] configureAuth start
	I0814 01:26:23.806707   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetMachineName
	I0814 01:26:23.807003   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetIP
	I0814 01:26:23.810017   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.810440   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.810464   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.810637   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:23.812940   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.813268   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:23.813353   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:23.813454   68995 provision.go:143] copyHostCerts
	I0814 01:26:23.813497   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:26:23.813506   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:26:23.813568   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:26:23.813666   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:26:23.813675   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:26:23.813709   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:26:23.813760   68995 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:26:23.813766   68995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:26:23.813787   68995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:26:23.813829   68995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.newest-cni-137211 san=[127.0.0.1 192.168.61.50 localhost minikube newest-cni-137211]
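The provision step above generates a server certificate signed by the local minikube CA, with the listed SANs baked in. As a rough, self-contained illustration of producing a server certificate with those SANs via Go's crypto/x509 (self-signed here for brevity, unlike the CA-signed certificate the provisioner actually writes):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key and certificate template; SANs taken from the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-137211"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-137211"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.50")},
	}
	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated server cert with SANs, %d bytes DER\n", len(der))
}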
	I0814 01:26:24.033945   68995 provision.go:177] copyRemoteCerts
	I0814 01:26:24.033996   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:26:24.034022   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.036359   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.036603   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.036634   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.036811   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.037006   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.037150   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.037285   68995 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa Username:docker}
	I0814 01:26:24.120928   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:26:24.143796   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:26:24.166732   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:26:24.188288   68995 provision.go:87] duration metric: took 381.576103ms to configureAuth
	I0814 01:26:24.188323   68995 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:26:24.188559   68995 config.go:182] Loaded profile config "newest-cni-137211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:26:24.188638   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.191496   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.191911   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.191935   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.192253   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.192452   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.192607   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.192756   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.192941   68995 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:24.193084   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0814 01:26:24.193097   68995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:26:24.450394   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:26:24.450429   68995 machine.go:97] duration metric: took 1.008163529s to provisionDockerMachine
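The %!s(MISSING) token in the printf command a few lines above (and the later %!N(MISSING) and %!p(MISSING)) is almost certainly a logging artifact rather than text that was executed: when Go's fmt package renders a string containing a format verb with no matching argument, it substitutes %!verb(MISSING). The successful tee output on the preceding lines supports that reading. A tiny self-contained example of how the marker arises:

package main

import "fmt"

func main() {
	// Two %s verbs but only one argument: fmt flags the second as MISSING.
	s := fmt.Sprintf("printf %s > %s", "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '")
	fmt.Println(s)
	// Output: printf CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' > %!s(MISSING)
}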
	I0814 01:26:24.450445   68995 start.go:293] postStartSetup for "newest-cni-137211" (driver="kvm2")
	I0814 01:26:24.450459   68995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:26:24.450484   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:24.450853   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:26:24.450882   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.453275   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.453624   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.453646   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.453781   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.453969   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.454140   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.454287   68995 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa Username:docker}
	I0814 01:26:24.536271   68995 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:26:24.539920   68995 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:26:24.539947   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:26:24.540014   68995 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:26:24.540097   68995 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:26:24.540190   68995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:26:24.549478   68995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:26:24.571869   68995 start.go:296] duration metric: took 121.409594ms for postStartSetup
	I0814 01:26:24.571911   68995 fix.go:56] duration metric: took 19.30758252s for fixHost
	I0814 01:26:24.571934   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.574552   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.574883   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.574912   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.575090   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.575285   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.575432   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.575607   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.575785   68995 main.go:141] libmachine: Using SSH client type: native
	I0814 01:26:24.575966   68995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0814 01:26:24.575979   68995 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:26:24.682448   68995 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723598784.660141980
	
	I0814 01:26:24.682469   68995 fix.go:216] guest clock: 1723598784.660141980
	I0814 01:26:24.682479   68995 fix.go:229] Guest: 2024-08-14 01:26:24.66014198 +0000 UTC Remote: 2024-08-14 01:26:24.571916602 +0000 UTC m=+19.447055013 (delta=88.225378ms)
	I0814 01:26:24.682502   68995 fix.go:200] guest clock delta is within tolerance: 88.225378ms
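The delta reported above is simply the guest's clock (read over SSH) minus the host's wall clock at the end of fixHost: 01:26:24.660141980 - 01:26:24.571916602 = 88.225378ms, which is then compared against a tolerance. A minimal Go sketch of that comparison using the exact values from the log; the tolerance value below is an assumption for illustration, not necessarily the one minikube uses.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Guest:" / "Remote:" log line above.
	guest := time.Date(2024, 8, 14, 1, 26, 24, 660141980, time.UTC)
	remote := time.Date(2024, 8, 14, 1, 26, 24, 571916602, time.UTC)

	delta := guest.Sub(remote)        // 88.225378ms
	const tolerance = 2 * time.Second // assumed threshold, illustration only
	within := delta < tolerance && delta > -tolerance

	fmt.Printf("guest clock delta is within tolerance: %v (%v)\n", within, delta)
}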
	I0814 01:26:24.682508   68995 start.go:83] releasing machines lock for "newest-cni-137211", held for 19.418191658s
	I0814 01:26:24.682531   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:24.682771   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetIP
	I0814 01:26:24.685328   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.685609   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.685636   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.685822   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:24.686278   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:24.686448   68995 main.go:141] libmachine: (newest-cni-137211) Calling .DriverName
	I0814 01:26:24.686541   68995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:26:24.686593   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.686599   68995 ssh_runner.go:195] Run: cat /version.json
	I0814 01:26:24.686613   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHHostname
	I0814 01:26:24.689129   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.689214   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.689469   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.689492   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.689536   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:24.689558   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:24.689707   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.689732   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHPort
	I0814 01:26:24.689884   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.689888   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHKeyPath
	I0814 01:26:24.690057   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.690060   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetSSHUsername
	I0814 01:26:24.690180   68995 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa Username:docker}
	I0814 01:26:24.690233   68995 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/newest-cni-137211/id_rsa Username:docker}
	I0814 01:26:24.794455   68995 ssh_runner.go:195] Run: systemctl --version
	I0814 01:26:24.800300   68995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:26:24.942664   68995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:26:24.948366   68995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:26:24.948431   68995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:26:24.962959   68995 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:26:24.962983   68995 start.go:495] detecting cgroup driver to use...
	I0814 01:26:24.963048   68995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:26:24.977331   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:26:24.991013   68995 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:26:24.991057   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:26:25.003271   68995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:26:25.017199   68995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:26:25.127426   68995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:26:25.267671   68995 docker.go:233] disabling docker service ...
	I0814 01:26:25.267743   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:26:25.281728   68995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:26:25.294175   68995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:26:25.437149   68995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:26:25.557605   68995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:26:25.571434   68995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:26:25.588611   68995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:26:25.588684   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.598221   68995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:26:25.598294   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.607820   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.617577   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.627226   68995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:26:25.637468   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.646951   68995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.662470   68995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:26:25.671598   68995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:26:25.679952   68995 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:26:25.679995   68995 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:26:25.692084   68995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
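The three commands above form a probe-then-fallback sequence: check the bridge-netfilter sysctl, load br_netfilter only if /proc/sys/net/bridge is missing (the status-255 case the log notes "might be okay"), then enable IPv4 forwarding. A rough Go sketch of the same flow with os/exec; the function name and error handling are illustrative assumptions, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the log: probe the sysctl, fall back to modprobe,
// then enable IPv4 forwarding unconditionally.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key not present yet; loading br_netfilter creates /proc/sys/net/bridge.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}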
	I0814 01:26:25.700567   68995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:26:25.811042   68995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:26:25.934948   68995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:26:25.935026   68995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:26:25.939735   68995 start.go:563] Will wait 60s for crictl version
	I0814 01:26:25.939781   68995 ssh_runner.go:195] Run: which crictl
	I0814 01:26:25.944003   68995 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:26:25.985566   68995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:26:25.985655   68995 ssh_runner.go:195] Run: crio --version
	I0814 01:26:26.013775   68995 ssh_runner.go:195] Run: crio --version
	I0814 01:26:26.045160   68995 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:26:26.046299   68995 main.go:141] libmachine: (newest-cni-137211) Calling .GetIP
	I0814 01:26:26.049107   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:26.049484   68995 main.go:141] libmachine: (newest-cni-137211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b5:2a", ip: ""} in network mk-newest-cni-137211: {Iface:virbr4 ExpiryTime:2024-08-14 02:26:15 +0000 UTC Type:0 Mac:52:54:00:15:b5:2a Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:newest-cni-137211 Clientid:01:52:54:00:15:b5:2a}
	I0814 01:26:26.049516   68995 main.go:141] libmachine: (newest-cni-137211) DBG | domain newest-cni-137211 has defined IP address 192.168.61.50 and MAC address 52:54:00:15:b5:2a in network mk-newest-cni-137211
	I0814 01:26:26.049663   68995 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:26:26.054148   68995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:26:26.068212   68995 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.047267937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598787047197014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8925088c-ebab-4443-95a4-f6f42c53966f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.048349114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e5cc3f8-3b31-4eaf-82bb-1a3e3586df69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.048513058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e5cc3f8-3b31-4eaf-82bb-1a3e3586df69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.048922054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e5cc3f8-3b31-4eaf-82bb-1a3e3586df69 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.102169991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ef9151d-a022-44f3-9c9a-3054ee78571d name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.102277516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ef9151d-a022-44f3-9c9a-3054ee78571d name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.104336264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c7c8c1c-767a-4e47-9e4b-09afe489a90e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.104923320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598787104888495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c7c8c1c-767a-4e47-9e4b-09afe489a90e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.105851029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a5f2e5e-ab96-4b8c-adab-100b2693ecb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.105924784Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a5f2e5e-ab96-4b8c-adab-100b2693ecb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.106279457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a5f2e5e-ab96-4b8c-adab-100b2693ecb5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.159779947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc9862d1-bcbe-4a49-8a25-187c38d56cad name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.159882862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc9862d1-bcbe-4a49-8a25-187c38d56cad name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.162922538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20f4caab-2a58-4a42-a333-ce30a2afb987 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.163913538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598787163871830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20f4caab-2a58-4a42-a333-ce30a2afb987 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.164653224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47a18564-d0cc-4d1c-a226-8eeb8a8a19cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.164895787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47a18564-d0cc-4d1c-a226-8eeb8a8a19cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.165627735Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47a18564-d0cc-4d1c-a226-8eeb8a8a19cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.211759216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad4a3375-a502-4590-989f-612b157407fa name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.211867417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad4a3375-a502-4590-989f-612b157407fa name=/runtime.v1.RuntimeService/Version
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.213431516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=728619a2-6093-4710-ad96-2ed91094a4d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.214479775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598787214408571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=728619a2-6093-4710-ad96-2ed91094a4d2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.216062094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee04cfce-cff2-443e-b47d-08eb2d97b198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.216138961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee04cfce-cff2-443e-b47d-08eb2d97b198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:26:27 embed-certs-901410 crio[719]: time="2024-08-14 01:26:27.216425206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750,PodSandboxId:456824ba216bc02d7eea01f29a435927718740b95335ec0605a839a5396144cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893698047975,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-bq2xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6593bc2b-ef8f-4738-8674-dcaea675b88b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd,PodSandboxId:1aeb9620f6e92bb3059530d1e00fd469e38cd2cf9e954759228673529d289306,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723597893623304476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwd2j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0,PodSandboxId:d22748ea915f0112abd8b3b2fb5387e403c18daabe81b7ccabc4d7628f290dbc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1723597893175687740,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f82856-b50c-4a5f-b0c7-4cd81e4b896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec,PodSandboxId:59094e46534ecc6cf847e184e4c1b9df403daf0ed3a6ff0eb7ffebafced70784,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723597892495185110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqmzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d63b14-ce56-4d0b-8511-1198b306b70e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b,PodSandboxId:00be091a5308bf9986dd3b0b658dd5d29deed7448be32fd8bfebfdc626d6310d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723597881630040350,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb6ac68784a32ac3c43783c2aebbb5,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148,PodSandboxId:16ffbc8b427803d25768aa74bfbf40b3f96b30cfa716709f51387a164c705913,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723597881610673438,Labe
ls:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb842eb0c22d098ebfbdd3f6dcb5e402,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8,PodSandboxId:b3fad8a44c7c9d047bc07a3eda3bf5c694b82a2ca714d5c873472bd6668e49b0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723597881556133733,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975,PodSandboxId:d491ad9827cf45f4ec888575f176a81f87ce619d0294a6a4eb58ffe9cafadcff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723597881576634978,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974f8dae03a593e482ff3abf15b255b4,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c,PodSandboxId:27c7b14f7d5570f869dabb48fd19795527668dc71e7e276cd6f823d2aba11740,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723597599439548471,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-901410,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9eb85559dc39794c5c6b039a2647d929,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee04cfce-cff2-443e-b47d-08eb2d97b198 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	41fb6b83dfb3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   456824ba216bc       coredns-6f6b679f8f-bq2xk
	31ff007001de7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   1aeb9620f6e92       coredns-6f6b679f8f-lwd2j
	494f8cefbe325       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   d22748ea915f0       storage-provisioner
	3217f55ca95d1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   59094e46534ec       kube-proxy-fqmzw
	d3eb4c3d01238       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   00be091a5308b       kube-controller-manager-embed-certs-901410
	5e65af6cb886c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   16ffbc8b42780       etcd-embed-certs-901410
	d9118fcad6781       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   d491ad9827cf4       kube-scheduler-embed-certs-901410
	9eb586f43234c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   b3fad8a44c7c9       kube-apiserver-embed-certs-901410
	b6efc64c66f05       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   27c7b14f7d557       kube-apiserver-embed-certs-901410
	
	
	==> coredns [31ff007001de7d87571616c153cc4f440e2731c8b4ea3189746ca0c2b48fc1dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [41fb6b83dfb3f8f5909f7ee7957b423f57086bfef6610cebdf4982ec8169f750] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-901410
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-901410
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf
	                    minikube.k8s.io/name=embed-certs-901410
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 01:11:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-901410
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 01:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 01:21:49 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 01:21:49 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 01:21:49 +0000   Wed, 14 Aug 2024 01:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 01:21:49 +0000   Wed, 14 Aug 2024 01:11:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.210
	  Hostname:    embed-certs-901410
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 93e154269592459d97e1c17229f46f37
	  System UUID:                93e15426-9592-459d-97e1-c17229f46f37
	  Boot ID:                    300eaa70-a88c-442b-b909-4a6828c5fd21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-bq2xk                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-lwd2j                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-901410                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-901410             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-901410    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fqmzw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-901410             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-mwl74               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-901410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-901410 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-901410 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-901410 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-901410 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-901410 event: Registered Node embed-certs-901410 in Controller
	
	
	==> dmesg <==
	[  +0.062072] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046627] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.031549] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.807074] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.624816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.508933] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.060346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066118] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.162115] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.135580] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.254977] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +3.908429] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +1.835118] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.065900] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.493363] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.246596] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 01:11] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.549789] systemd-fstab-generator[2623]: Ignoring "noauto" option for root device
	[  +4.598913] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.447837] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +5.379031] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +0.091855] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.952538] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5e65af6cb886c8d6ed105e20eb2a92ce2351df090146b9668765c9487e8fe148] <==
	{"level":"info","ts":"2024-08-14T01:11:22.251153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 received MsgVoteResp from 76d7bf11a8e4dc23 at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"76d7bf11a8e4dc23 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.251174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 76d7bf11a8e4dc23 elected leader 76d7bf11a8e4dc23 at term 2"}
	{"level":"info","ts":"2024-08-14T01:11:22.255182Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.259800Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"76d7bf11a8e4dc23","local-member-attributes":"{Name:embed-certs-901410 ClientURLs:[https://192.168.50.210:2379]}","request-path":"/0/members/76d7bf11a8e4dc23/attributes","cluster-id":"92c5f3445ccd6516","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T01:11:22.260632Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"92c5f3445ccd6516","local-member-id":"76d7bf11a8e4dc23","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260750Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T01:11:22.260788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:11:22.261085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T01:11:22.268167Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:11:22.275131Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.210:2379"}
	{"level":"info","ts":"2024-08-14T01:11:22.275675Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T01:11:22.279433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T01:11:22.279460Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T01:11:22.279842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T01:21:22.422315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-08-14T01:21:22.431160Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":718,"took":"8.338661ms","hash":137738887,"current-db-size-bytes":2260992,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2260992,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-14T01:21:22.431253Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":137738887,"revision":718,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-14T01:25:39.782456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.04859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T01:25:39.783235Z","caller":"traceutil/trace.go:171","msg":"trace[1697082136] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1170; }","duration":"104.907923ms","start":"2024-08-14T01:25:39.678289Z","end":"2024-08-14T01:25:39.783197Z","steps":["trace[1697082136] 'range keys from in-memory index tree'  (duration: 103.959755ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T01:26:22.428366Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":961}
	{"level":"info","ts":"2024-08-14T01:26:22.431619Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":961,"took":"2.989607ms","hash":3920959002,"current-db-size-bytes":2260992,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1605632,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T01:26:22.431665Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3920959002,"revision":961,"compact-revision":718}
	
	
	==> kernel <==
	 01:26:27 up 20 min,  0 users,  load average: 0.26, 0.19, 0.12
	Linux embed-certs-901410 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9eb586f43234ce42831ea6736853ad2af69f18c7b5bde338b29f19749c8b60b8] <==
	I0814 01:22:25.130718       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:22:25.130791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:24:25.131498       1 handler_proxy.go:99] no RequestInfo found in the context
	W0814 01:24:25.131511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:24:25.131885       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0814 01:24:25.131979       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 01:24:25.133247       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:24:25.133321       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 01:26:24.131388       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:24.131562       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:26:25.133280       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:25.133365       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 01:26:25.133426       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 01:26:25.133446       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 01:26:25.134479       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 01:26:25.134538       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b6efc64c66f052fcd425420e6d1adc2be719b96dcee74ae7ecf504620233a36c] <==
	W0814 01:11:17.623235       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.634916       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.669376       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.781813       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.797649       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.807698       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.811235       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.854353       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.891616       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.906372       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.912888       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.918379       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.939965       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.943329       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.974275       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:17.982093       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.001569       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.094658       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.099079       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.132635       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.141433       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.143834       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.324951       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.436610       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 01:11:18.462600       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d3eb4c3d012388b816b911cdc033948549417639e9a68f72586d72a9b0a9614b] <==
	E0814 01:21:01.202996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:21:01.655623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:21:31.210128       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:21:31.664163       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:21:49.342085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-901410"
	E0814 01:22:01.216317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:22:01.672264       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:22:26.708947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="247.388µs"
	E0814 01:22:31.225309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:22:31.681436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 01:22:37.705607       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="70.508µs"
	E0814 01:23:01.231069       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:01.688903       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:23:31.238752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:23:31.697359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:01.245847       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:01.705416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:24:31.252547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:24:31.712846       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:01.258655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:01.720957       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:25:31.265061       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:25:31.730538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 01:26:01.270679       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 01:26:01.738429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3217f55ca95d151e1c155a2f1a0579f0da830b734b1aece44be67ebda5d316ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 01:11:33.037319       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 01:11:33.066538       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.210"]
	E0814 01:11:33.066608       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 01:11:33.137963       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 01:11:33.138036       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 01:11:33.138090       1 server_linux.go:169] "Using iptables Proxier"
	I0814 01:11:33.144440       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 01:11:33.144732       1 server.go:483] "Version info" version="v1.31.0"
	I0814 01:11:33.144744       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 01:11:33.146307       1 config.go:197] "Starting service config controller"
	I0814 01:11:33.146332       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 01:11:33.146360       1 config.go:104] "Starting endpoint slice config controller"
	I0814 01:11:33.146365       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 01:11:33.147999       1 config.go:326] "Starting node config controller"
	I0814 01:11:33.148066       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 01:11:33.247140       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 01:11:33.247230       1 shared_informer.go:320] Caches are synced for service config
	I0814 01:11:33.249169       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d9118fcad6781b261307d099b3d7883f3508d0b188641ebc28db65e60502c975] <==
	W0814 01:11:24.546067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 01:11:24.547345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.547362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:11:24.547394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546336       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.547415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 01:11:24.547430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.546481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 01:11:24.547446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550142       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 01:11:24.550197       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 01:11:24.550603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 01:11:24.550656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.550791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 01:11:24.550876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:24.550888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 01:11:24.550970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 01:11:25.444056       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 01:11:25.444169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0814 01:11:26.147963       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 01:25:26 embed-certs-901410 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:25:26 embed-certs-901410 kubelet[2950]: E0814 01:25:26.940560    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598726940373475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:26 embed-certs-901410 kubelet[2950]: E0814 01:25:26.940601    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598726940373475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:29 embed-certs-901410 kubelet[2950]: E0814 01:25:29.691378    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:25:36 embed-certs-901410 kubelet[2950]: E0814 01:25:36.942458    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598736941607895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:36 embed-certs-901410 kubelet[2950]: E0814 01:25:36.942498    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598736941607895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:42 embed-certs-901410 kubelet[2950]: E0814 01:25:42.692254    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:25:46 embed-certs-901410 kubelet[2950]: E0814 01:25:46.944515    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598746944100009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:46 embed-certs-901410 kubelet[2950]: E0814 01:25:46.944551    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598746944100009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:54 embed-certs-901410 kubelet[2950]: E0814 01:25:54.693882    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:25:56 embed-certs-901410 kubelet[2950]: E0814 01:25:56.946140    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598756945858786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:25:56 embed-certs-901410 kubelet[2950]: E0814 01:25:56.946178    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598756945858786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:06 embed-certs-901410 kubelet[2950]: E0814 01:26:06.951205    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598766950789431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:06 embed-certs-901410 kubelet[2950]: E0814 01:26:06.951244    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598766950789431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:09 embed-certs-901410 kubelet[2950]: E0814 01:26:09.691320    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:26:16 embed-certs-901410 kubelet[2950]: E0814 01:26:16.952461    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598776952227073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:16 embed-certs-901410 kubelet[2950]: E0814 01:26:16.952495    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598776952227073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:21 embed-certs-901410 kubelet[2950]: E0814 01:26:21.691966    2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mwl74" podUID="065b6973-cd9d-4091-96b9-8dff2c5f85eb"
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]: E0814 01:26:26.710586    2950 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]: E0814 01:26:26.953783    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598786953398157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 01:26:26 embed-certs-901410 kubelet[2950]: E0814 01:26:26.953819    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598786953398157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [494f8cefbe325338845c1fc777d9263142510e7e99b8ff1217f99009a69f7db0] <==
	I0814 01:11:33.371576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 01:11:33.407831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 01:11:33.407876       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 01:11:33.432157       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 01:11:33.432300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1!
	I0814 01:11:33.432356       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86f447d8-c26e-4e0d-89f9-4906967e1531", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1 became leader
	I0814 01:11:33.533508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-901410_1ea964c2-b206-4cc5-93d4-c9d812387ab1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-901410 -n embed-certs-901410
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-901410 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mwl74
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74: exit status 1 (67.429683ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mwl74" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.69s)
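The post-mortem sequence above can be replayed by hand against the same profile; a minimal sketch, assuming the embed-certs-901410 kubectl context still exists and kubectl is on PATH (the explicit -n kube-system flag is an addition here, since the metrics-server pod lives in that namespace):

	# list pods that are not Running, using the same field selector the helper uses
	kubectl --context embed-certs-901410 get po -A --field-selector=status.phase!=Running
	# describe one of the returned pods, passing its namespace explicitly
	kubectl --context embed-certs-901410 describe pod metrics-server-6867b74b74-mwl74 -n kube-system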

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (225.151628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-179312" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-179312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-179312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.526µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-179312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
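As a minimal manual-triage sketch (assuming the old-k8s-version-179312 profile/context and the kubernetes-dashboard objects named in the log above; these commands are illustrative and not part of the captured test output), the same checks can be rerun by hand:

	# Confirm whether the profile's apiserver is actually running (exit status 2 above corresponds to "Stopped").
	minikube status -p old-k8s-version-179312
	# If the apiserver is up, list the dashboard pods the test was polling for.
	kubectl --context old-k8s-version-179312 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Inspect the scraper deployment image; the test expects it to contain registry.k8s.io/echoserver:1.4.
	kubectl --context old-k8s-version-179312 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
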
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (215.475645ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25
E0814 01:25:05.519383   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-179312 logs -n 25: (1.51128934s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-074686                                        | pause-074686                 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-655306 | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:56 UTC |
	|         | disable-driver-mounts-655306                           |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:56 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:57 UTC | 14 Aug 24 00:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-769488                              | cert-expiration-769488       | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-901410            | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-776907             | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC | 14 Aug 24 00:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 00:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-585256  | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC | 14 Aug 24 00:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 00:59 UTC |                     |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-179312        | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-901410                 | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-901410                                  | embed-certs-901410           | jenkins | v1.33.1 | 14 Aug 24 01:00 UTC | 14 Aug 24 01:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-776907                  | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-776907                                   | no-preload-776907            | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-585256       | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-585256 | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:11 UTC |
	|         | default-k8s-diff-port-585256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-179312             | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC | 14 Aug 24 01:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-179312                              | old-k8s-version-179312       | jenkins | v1.33.1 | 14 Aug 24 01:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 01:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 01:01:39.512898   61804 out.go:291] Setting OutFile to fd 1 ...
	I0814 01:01:39.513038   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513051   61804 out.go:304] Setting ErrFile to fd 2...
	I0814 01:01:39.513057   61804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 01:01:39.513259   61804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 01:01:39.513864   61804 out.go:298] Setting JSON to false
	I0814 01:01:39.514866   61804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6245,"bootTime":1723591054,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 01:01:39.514924   61804 start.go:139] virtualization: kvm guest
	I0814 01:01:39.516858   61804 out.go:177] * [old-k8s-version-179312] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 01:01:39.518018   61804 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 01:01:39.518036   61804 notify.go:220] Checking for updates...
	I0814 01:01:39.520190   61804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 01:01:39.521372   61804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:01:39.522536   61804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 01:01:39.523748   61804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 01:01:39.524905   61804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 01:01:39.526506   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:01:39.526919   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.526976   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.541877   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0814 01:01:39.542250   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.542776   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.542796   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.543149   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.543304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.544990   61804 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 01:01:39.546103   61804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 01:01:39.546426   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:01:39.546461   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:01:39.561404   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0814 01:01:39.561820   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:01:39.562277   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:01:39.562305   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:01:39.562609   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:01:39.562824   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:01:39.598760   61804 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 01:01:39.599899   61804 start.go:297] selected driver: kvm2
	I0814 01:01:39.599912   61804 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.600052   61804 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 01:01:39.600706   61804 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.600767   61804 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 01:01:39.616316   61804 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 01:01:39.616678   61804 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:01:39.616712   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:01:39.616719   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:01:39.616748   61804 start.go:340] cluster config:
	{Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:01:39.616839   61804 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 01:01:39.618491   61804 out.go:177] * Starting "old-k8s-version-179312" primary control-plane node in "old-k8s-version-179312" cluster
	I0814 01:01:36.022382   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:39.094354   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:38.136107   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:01:38.136146   61689 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:38.136159   61689 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:38.136234   61689 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:38.136245   61689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 01:01:38.136360   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:01:38.136567   61689 start.go:360] acquireMachinesLock for default-k8s-diff-port-585256: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:39.619632   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:01:39.619674   61804 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 01:01:39.619694   61804 cache.go:56] Caching tarball of preloaded images
	I0814 01:01:39.619767   61804 preload.go:172] Found /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 01:01:39.619781   61804 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 01:01:39.619899   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:01:39.620085   61804 start.go:360] acquireMachinesLock for old-k8s-version-179312: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:01:45.174229   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:48.246337   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:54.326275   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:01:57.398310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:03.478349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:06.550262   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:12.630330   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:15.702383   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:21.782321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:24.854346   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:30.934349   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:34.006298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:40.086361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:43.158326   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:49.238298   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:52.310357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:02:58.390361   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:01.462356   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:07.542292   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:10.614310   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:16.694325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:19.766305   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:25.846331   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:28.918369   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:34.998360   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:38.070357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:44.150338   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:47.222336   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:53.302301   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:03:56.374355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:02.454379   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:05.526325   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:11.606322   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:14.678359   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:20.758332   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:23.830339   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:29.910318   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:32.982355   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:39.062376   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:42.134351   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:48.214321   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:51.286357   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:04:57.366282   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:00.438378   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:06.518254   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:09.590272   61115 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.210:22: connect: no route to host
	I0814 01:05:12.594550   61447 start.go:364] duration metric: took 3m55.982517455s to acquireMachinesLock for "no-preload-776907"
	I0814 01:05:12.594617   61447 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:12.594639   61447 fix.go:54] fixHost starting: 
	I0814 01:05:12.595017   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:12.595051   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:12.611377   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0814 01:05:12.611848   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:12.612405   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:12.612433   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:12.612810   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:12.613004   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:12.613170   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:12.614831   61447 fix.go:112] recreateIfNeeded on no-preload-776907: state=Stopped err=<nil>
	I0814 01:05:12.614852   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	W0814 01:05:12.615027   61447 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:12.616713   61447 out.go:177] * Restarting existing kvm2 VM for "no-preload-776907" ...
	I0814 01:05:12.591919   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:12.591979   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592302   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:05:12.592333   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:05:12.592567   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:05:12.594384   61115 machine.go:97] duration metric: took 4m37.436734696s to provisionDockerMachine
	I0814 01:05:12.594452   61115 fix.go:56] duration metric: took 4m37.45620173s for fixHost
	I0814 01:05:12.594468   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 4m37.456229846s
	W0814 01:05:12.594503   61115 start.go:714] error starting host: provision: host is not running
	W0814 01:05:12.594696   61115 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 01:05:12.594717   61115 start.go:729] Will try again in 5 seconds ...
	I0814 01:05:12.617855   61447 main.go:141] libmachine: (no-preload-776907) Calling .Start
	I0814 01:05:12.618047   61447 main.go:141] libmachine: (no-preload-776907) Ensuring networks are active...
	I0814 01:05:12.619058   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network default is active
	I0814 01:05:12.619398   61447 main.go:141] libmachine: (no-preload-776907) Ensuring network mk-no-preload-776907 is active
	I0814 01:05:12.619763   61447 main.go:141] libmachine: (no-preload-776907) Getting domain xml...
	I0814 01:05:12.620437   61447 main.go:141] libmachine: (no-preload-776907) Creating domain...
	I0814 01:05:13.819938   61447 main.go:141] libmachine: (no-preload-776907) Waiting to get IP...
	I0814 01:05:13.820741   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:13.821142   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:13.821244   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:13.821137   62559 retry.go:31] will retry after 224.897937ms: waiting for machine to come up
	I0814 01:05:14.047611   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.048046   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.048073   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.047999   62559 retry.go:31] will retry after 289.797156ms: waiting for machine to come up
	I0814 01:05:14.339577   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.339966   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.339990   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.339923   62559 retry.go:31] will retry after 335.55372ms: waiting for machine to come up
	I0814 01:05:14.677277   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:14.677646   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:14.677850   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:14.677612   62559 retry.go:31] will retry after 376.666569ms: waiting for machine to come up
	I0814 01:05:15.056486   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.057008   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.057046   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.056935   62559 retry.go:31] will retry after 594.277419ms: waiting for machine to come up
	I0814 01:05:15.652571   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:15.653122   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:15.653156   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:15.653066   62559 retry.go:31] will retry after 827.123674ms: waiting for machine to come up
	I0814 01:05:16.482405   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:16.482799   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:16.482827   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:16.482746   62559 retry.go:31] will retry after 897.843008ms: waiting for machine to come up
	I0814 01:05:17.595257   61115 start.go:360] acquireMachinesLock for embed-certs-901410: {Name:mk8ab1b491404dd6cc95a37a50c74a14d6d0e4c9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 01:05:17.381838   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:17.382282   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:17.382309   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:17.382233   62559 retry.go:31] will retry after 1.346474914s: waiting for machine to come up
	I0814 01:05:18.730384   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:18.730837   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:18.730865   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:18.730770   62559 retry.go:31] will retry after 1.755579596s: waiting for machine to come up
	I0814 01:05:20.488719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:20.489235   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:20.489269   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:20.489180   62559 retry.go:31] will retry after 1.82357845s: waiting for machine to come up
	I0814 01:05:22.315099   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:22.315508   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:22.315543   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:22.315458   62559 retry.go:31] will retry after 1.799604975s: waiting for machine to come up
	I0814 01:05:24.116869   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:24.117361   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:24.117389   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:24.117302   62559 retry.go:31] will retry after 2.588913034s: waiting for machine to come up
	I0814 01:05:26.708996   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:26.709436   61447 main.go:141] libmachine: (no-preload-776907) DBG | unable to find current IP address of domain no-preload-776907 in network mk-no-preload-776907
	I0814 01:05:26.709462   61447 main.go:141] libmachine: (no-preload-776907) DBG | I0814 01:05:26.709395   62559 retry.go:31] will retry after 3.736481406s: waiting for machine to come up
	I0814 01:05:30.449552   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450068   61447 main.go:141] libmachine: (no-preload-776907) Found IP for machine: 192.168.72.94
	I0814 01:05:30.450093   61447 main.go:141] libmachine: (no-preload-776907) Reserving static IP address...
	I0814 01:05:30.450109   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has current primary IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.450584   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.450609   61447 main.go:141] libmachine: (no-preload-776907) Reserved static IP address: 192.168.72.94
	I0814 01:05:30.450629   61447 main.go:141] libmachine: (no-preload-776907) DBG | skip adding static IP to network mk-no-preload-776907 - found existing host DHCP lease matching {name: "no-preload-776907", mac: "52:54:00:96:29:79", ip: "192.168.72.94"}
	I0814 01:05:30.450640   61447 main.go:141] libmachine: (no-preload-776907) Waiting for SSH to be available...
	I0814 01:05:30.450652   61447 main.go:141] libmachine: (no-preload-776907) DBG | Getting to WaitForSSH function...
	I0814 01:05:30.452908   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453222   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.453250   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.453351   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH client type: external
	I0814 01:05:30.453380   61447 main.go:141] libmachine: (no-preload-776907) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa (-rw-------)
	I0814 01:05:30.453413   61447 main.go:141] libmachine: (no-preload-776907) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:30.453430   61447 main.go:141] libmachine: (no-preload-776907) DBG | About to run SSH command:
	I0814 01:05:30.453443   61447 main.go:141] libmachine: (no-preload-776907) DBG | exit 0
	I0814 01:05:30.574126   61447 main.go:141] libmachine: (no-preload-776907) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:30.574502   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetConfigRaw
	I0814 01:05:30.575125   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.577732   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578169   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.578203   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.578449   61447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/config.json ...
	I0814 01:05:30.578651   61447 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:30.578669   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:30.578916   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.581363   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581653   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.581678   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.581769   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.581944   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582114   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.582230   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.582389   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.582631   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.582641   61447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:30.678219   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:30.678248   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678530   61447 buildroot.go:166] provisioning hostname "no-preload-776907"
	I0814 01:05:30.678560   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.678736   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.681602   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.681914   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.681943   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.682058   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.682224   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.682507   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.682662   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.682832   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.682844   61447 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-776907 && echo "no-preload-776907" | sudo tee /etc/hostname
	I0814 01:05:30.790444   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-776907
	
	I0814 01:05:30.790476   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.793090   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793357   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.793386   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.793503   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:30.793713   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.793885   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:30.794030   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:30.794206   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:30.794390   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:30.794411   61447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-776907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-776907/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-776907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:30.897761   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:30.897818   61447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:30.897869   61447 buildroot.go:174] setting up certificates
	I0814 01:05:30.897890   61447 provision.go:84] configureAuth start
	I0814 01:05:30.897915   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetMachineName
	I0814 01:05:30.898272   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:30.900961   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901235   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.901268   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.901432   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:30.903329   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903604   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:30.903634   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:30.903799   61447 provision.go:143] copyHostCerts
	I0814 01:05:30.903866   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:30.903881   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:30.903960   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:30.904104   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:30.904126   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:30.904165   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:30.904259   61447 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:30.904271   61447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:30.904304   61447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:30.904389   61447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.no-preload-776907 san=[127.0.0.1 192.168.72.94 localhost minikube no-preload-776907]
	I0814 01:05:31.219047   61447 provision.go:177] copyRemoteCerts
	I0814 01:05:31.219108   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:31.219138   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.222328   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222679   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.222719   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.222858   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.223059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.223199   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.223368   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.299711   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:31.321459   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 01:05:31.342798   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:31.363610   61447 provision.go:87] duration metric: took 465.708315ms to configureAuth
	I0814 01:05:31.363636   61447 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:31.363877   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:31.363970   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.366458   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366723   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.366753   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.366948   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.367154   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367300   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.367452   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.367605   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.367826   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.367848   61447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:31.826307   61689 start.go:364] duration metric: took 3m53.689696917s to acquireMachinesLock for "default-k8s-diff-port-585256"
	I0814 01:05:31.826378   61689 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:31.826394   61689 fix.go:54] fixHost starting: 
	I0814 01:05:31.826794   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:31.826829   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:31.842943   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I0814 01:05:31.843345   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:31.843840   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:05:31.843872   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:31.844236   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:31.844445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:31.844653   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:05:31.846298   61689 fix.go:112] recreateIfNeeded on default-k8s-diff-port-585256: state=Stopped err=<nil>
	I0814 01:05:31.846319   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	W0814 01:05:31.846504   61689 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:31.848477   61689 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-585256" ...
	I0814 01:05:31.849592   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Start
	I0814 01:05:31.849779   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring networks are active...
	I0814 01:05:31.850320   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network default is active
	I0814 01:05:31.850622   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Ensuring network mk-default-k8s-diff-port-585256 is active
	I0814 01:05:31.850949   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Getting domain xml...
	I0814 01:05:31.851706   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Creating domain...
	I0814 01:05:31.612709   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:31.612730   61447 machine.go:97] duration metric: took 1.0340672s to provisionDockerMachine
	I0814 01:05:31.612741   61447 start.go:293] postStartSetup for "no-preload-776907" (driver="kvm2")
	I0814 01:05:31.612763   61447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:31.612794   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.613074   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:31.613098   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.615600   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.615957   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.615985   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.616091   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.616244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.616373   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.616516   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.691987   61447 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:31.695849   61447 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:31.695872   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:31.695940   61447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:31.696016   61447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:31.696099   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:31.704650   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:31.725889   61447 start.go:296] duration metric: took 113.131949ms for postStartSetup
	I0814 01:05:31.725939   61447 fix.go:56] duration metric: took 19.131305949s for fixHost
	I0814 01:05:31.725962   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.728613   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729001   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.729030   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.729178   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.729379   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729556   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.729721   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.729861   61447 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:31.730062   61447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.94 22 <nil> <nil>}
	I0814 01:05:31.730076   61447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:05:31.826139   61447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597531.803704808
	
	I0814 01:05:31.826161   61447 fix.go:216] guest clock: 1723597531.803704808
	I0814 01:05:31.826172   61447 fix.go:229] Guest: 2024-08-14 01:05:31.803704808 +0000 UTC Remote: 2024-08-14 01:05:31.72594365 +0000 UTC m=+255.249076472 (delta=77.761158ms)
	I0814 01:05:31.826197   61447 fix.go:200] guest clock delta is within tolerance: 77.761158ms
	I0814 01:05:31.826208   61447 start.go:83] releasing machines lock for "no-preload-776907", held for 19.231627325s
	I0814 01:05:31.826240   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.826536   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:31.829417   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829824   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.829854   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.829986   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830482   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830633   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:31.830697   61447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:31.830804   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.830894   61447 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:31.830914   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:31.833641   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.833963   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.833992   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834096   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834260   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834427   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.834549   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.834575   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:31.834599   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:31.834696   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.834773   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:31.834917   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:31.835101   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:31.835253   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:31.915928   61447 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:31.947877   61447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:32.091869   61447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:32.097278   61447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:32.097333   61447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:32.112225   61447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:32.112243   61447 start.go:495] detecting cgroup driver to use...
	I0814 01:05:32.112317   61447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:32.131562   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:32.145858   61447 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:32.145917   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:32.160887   61447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:32.175742   61447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:32.290421   61447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:32.420159   61447 docker.go:233] disabling docker service ...
	I0814 01:05:32.420237   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:32.434020   61447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:32.451378   61447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:32.601306   61447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:32.714480   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:32.727033   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:32.743611   61447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:32.743681   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.753404   61447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:32.753471   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.762934   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.772193   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.781270   61447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:32.791271   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.802788   61447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.821431   61447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:32.831529   61447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:32.840975   61447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:32.841033   61447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:32.854037   61447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:05:32.863437   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:32.999601   61447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:33.152806   61447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:33.152868   61447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:33.157209   61447 start.go:563] Will wait 60s for crictl version
	I0814 01:05:33.157266   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.160792   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:33.196825   61447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:33.196903   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.222886   61447 ssh_runner.go:195] Run: crio --version
	I0814 01:05:33.258900   61447 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:05:33.260059   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetIP
	I0814 01:05:33.263044   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263422   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:33.263449   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:33.263749   61447 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:33.268315   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:33.282628   61447 kubeadm.go:883] updating cluster {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:33.282744   61447 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:33.282800   61447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:33.319748   61447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:33.319777   61447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:05:33.319875   61447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.319855   61447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.319906   61447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.319846   61447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.319845   61447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.320006   61447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.320011   61447 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321705   61447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.321719   61447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.321741   61447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.321800   61447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.321820   61447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.321851   61447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:33.321862   61447 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 01:05:33.321858   61447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.549228   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 01:05:33.558351   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.561199   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.570929   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.573362   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.606128   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.623839   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.721634   61447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 01:05:33.721674   61447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 01:05:33.721695   61447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.721706   61447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.721718   61447 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 01:05:33.721743   61447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.721756   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721790   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721743   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.721822   61447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 01:05:33.721851   61447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.721904   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.733731   61447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 01:05:33.733762   61447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.733792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.745957   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.746027   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.746031   61447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 01:05:33.746075   61447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.746100   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.746110   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.746128   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:33.837313   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.837334   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.840696   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.840751   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.840821   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:33.840959   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.952383   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 01:05:33.952459   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 01:05:33.960252   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:33.966935   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 01:05:33.966980   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 01:05:33.966949   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 01:05:34.070125   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 01:05:34.070241   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:34.070361   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 01:05:34.070427   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.070495   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 01:05:34.091128   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 01:05:34.091240   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:34.092453   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 01:05:34.092547   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:34.092649   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 01:05:34.092743   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:34.100595   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 01:05:34.100616   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100663   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 01:05:34.100799   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 01:05:34.130869   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 01:05:34.130914   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 01:05:34.130931   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 01:05:34.130968   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 01:05:34.131021   61447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:34.197462   61447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080029   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.979348221s)
	I0814 01:05:36.080056   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 01:05:36.080081   61447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080140   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 01:05:36.080175   61447 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.882683519s)
	I0814 01:05:36.080139   61447 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.949094618s)
	I0814 01:05:36.080227   61447 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 01:05:36.080270   61447 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:36.080310   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:05:36.080232   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 01:05:33.131411   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting to get IP...
	I0814 01:05:33.132448   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132806   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.132920   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.132799   62699 retry.go:31] will retry after 311.730649ms: waiting for machine to come up
	I0814 01:05:33.446380   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446841   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.446870   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.446794   62699 retry.go:31] will retry after 383.687115ms: waiting for machine to come up
	I0814 01:05:33.832368   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.832974   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:33.833008   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:33.832808   62699 retry.go:31] will retry after 455.445491ms: waiting for machine to come up
	I0814 01:05:34.289395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289832   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.289869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.289782   62699 retry.go:31] will retry after 513.174411ms: waiting for machine to come up
	I0814 01:05:34.804399   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804842   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:34.804877   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:34.804793   62699 retry.go:31] will retry after 497.23394ms: waiting for machine to come up
	I0814 01:05:35.303286   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303809   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:35.303839   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:35.303757   62699 retry.go:31] will retry after 774.036418ms: waiting for machine to come up
	I0814 01:05:36.080026   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080605   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:36.080631   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:36.080572   62699 retry.go:31] will retry after 970.636476ms: waiting for machine to come up
	I0814 01:05:37.052546   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.052978   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:37.053007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:37.052929   62699 retry.go:31] will retry after 1.471882931s: waiting for machine to come up
	I0814 01:05:37.749423   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.669254345s)
	I0814 01:05:37.749462   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 01:05:37.749464   61447 ssh_runner.go:235] Completed: which crictl: (1.669139781s)
	I0814 01:05:37.749508   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:37.749520   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:37.749573   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 01:05:40.024973   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.275431609s)
	I0814 01:05:40.024997   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.275404079s)
	I0814 01:05:40.025019   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 01:05:40.025049   61447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:40.025050   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:40.025084   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 01:05:38.526491   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527039   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:38.527074   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:38.526996   62699 retry.go:31] will retry after 1.14308512s: waiting for machine to come up
	I0814 01:05:39.672470   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672869   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:39.672893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:39.672812   62699 retry.go:31] will retry after 2.208537111s: waiting for machine to come up
	I0814 01:05:41.883541   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.883981   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:41.884004   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:41.883925   62699 retry.go:31] will retry after 1.996466385s: waiting for machine to come up
	I0814 01:05:43.619471   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.594358195s)
	I0814 01:05:43.619507   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 01:05:43.619537   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619541   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.594466847s)
	I0814 01:05:43.619586   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 01:05:43.619612   61447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:44.986974   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.367364508s)
	I0814 01:05:44.987013   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 01:05:44.987045   61447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987041   61447 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.367403978s)
	I0814 01:05:44.987087   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 01:05:44.987109   61447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 01:05:44.987207   61447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:44.991463   61447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 01:05:43.882980   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883366   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:43.883395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:43.883327   62699 retry.go:31] will retry after 3.565128765s: waiting for machine to come up
	I0814 01:05:47.449997   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450447   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | unable to find current IP address of domain default-k8s-diff-port-585256 in network mk-default-k8s-diff-port-585256
	I0814 01:05:47.450477   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | I0814 01:05:47.450398   62699 retry.go:31] will retry after 3.284570516s: waiting for machine to come up
	I0814 01:05:46.846330   61447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.859214752s)
	I0814 01:05:46.846363   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 01:05:46.846397   61447 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:46.846448   61447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 01:05:47.484561   61447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 01:05:47.484612   61447 cache_images.go:123] Successfully loaded all cached images
	I0814 01:05:47.484618   61447 cache_images.go:92] duration metric: took 14.164829321s to LoadCachedImages
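For reference, the LoadCachedImages step above can be reproduced by hand inside the guest. This is only a sketch using the archive path and image tag that appear in this run; podman imports the tarball into the shared containers/storage store that CRI-O reads, and crictl removes any conflicting tag first:

	sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5   # drop the stale tag, if present
	sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1          # import a cached archive into containers/storage
	sudo /usr/bin/crictl images                                           # confirm the expected tags are visible to the CRI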
	I0814 01:05:47.484632   61447 kubeadm.go:934] updating node { 192.168.72.94 8443 v1.31.0 crio true true} ...
	I0814 01:05:47.484813   61447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-776907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:47.484897   61447 ssh_runner.go:195] Run: crio config
	I0814 01:05:47.530082   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:47.530105   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:47.530120   61447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:47.530143   61447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.94 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-776907 NodeName:no-preload-776907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:47.530285   61447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-776907"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:47.530350   61447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:47.540091   61447 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:47.540155   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:47.548445   61447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 01:05:47.563668   61447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:47.578184   61447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
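The rendered kubeadm configuration is staged at /var/tmp/minikube/kubeadm.yaml.new before being promoted below. If it ever needs checking by hand, kubeadm can exercise it without changing the node; a minimal sketch using the binary path from this run (the config validate subcommand is available in recent kubeadm releases):

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# or walk the full init flow without applying anything:
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run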
	I0814 01:05:47.593013   61447 ssh_runner.go:195] Run: grep 192.168.72.94	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:47.596371   61447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:47.606895   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:47.711714   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
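Once kubelet has been started, its state can be checked directly on the node, independent of minikube; a quick sketch:

	sudo systemctl is-active kubelet             # expect "active"
	sudo journalctl -u kubelet -n 20 --no-pager  # recent kubelet log lines, useful when the apiserver wait below stalls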
	I0814 01:05:47.726979   61447 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907 for IP: 192.168.72.94
	I0814 01:05:47.727006   61447 certs.go:194] generating shared ca certs ...
	I0814 01:05:47.727027   61447 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:47.727236   61447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:47.727305   61447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:47.727321   61447 certs.go:256] generating profile certs ...
	I0814 01:05:47.727446   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.key
	I0814 01:05:47.727532   61447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key.b2b1ec25
	I0814 01:05:47.727583   61447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key
	I0814 01:05:47.727745   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:47.727796   61447 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:47.727811   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:47.727846   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:47.727882   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:47.727907   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:47.727948   61447 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:47.728598   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:47.758661   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:47.790036   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:47.814323   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:47.839537   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 01:05:47.867466   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:05:47.898996   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:47.923051   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:47.946004   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:47.967147   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:47.988005   61447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:48.009704   61447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:48.024096   61447 ssh_runner.go:195] Run: openssl version
	I0814 01:05:48.029499   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:48.038961   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042928   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.042967   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:48.048101   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:48.057498   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:48.067275   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071457   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.071503   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:48.076924   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:48.086951   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:48.097071   61447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101070   61447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.101116   61447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:48.106289   61447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
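The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject-hash links: the link name is the hash of the certificate's subject, which is how OpenSSL locates CAs under /etc/ssl/certs. The hash can be reproduced directly, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0
	# "sudo openssl rehash /etc/ssl/certs" regenerates every such link in one pass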
	I0814 01:05:48.116109   61447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:48.119931   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:48.124976   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:48.129900   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:48.135041   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:48.140528   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:48.145653   61447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
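Each of the -checkend 86400 probes above simply asks whether the certificate will still be valid 24 hours (86400 seconds) from now; openssl answers through its exit status, so a manual check looks like:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate is valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or has already expired)"
	fi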
	I0814 01:05:48.150733   61447 kubeadm.go:392] StartCluster: {Name:no-preload-776907 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-776907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:48.150833   61447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:48.150869   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.184513   61447 cri.go:89] found id: ""
	I0814 01:05:48.184585   61447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:48.194089   61447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:48.194107   61447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:48.194145   61447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:48.202993   61447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:48.203917   61447 kubeconfig.go:125] found "no-preload-776907" server: "https://192.168.72.94:8443"
	I0814 01:05:48.205929   61447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:48.214947   61447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.94
	I0814 01:05:48.214974   61447 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:48.214985   61447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:48.215023   61447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:48.247731   61447 cri.go:89] found id: ""
	I0814 01:05:48.247803   61447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:48.262901   61447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:48.271600   61447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:48.271616   61447 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:48.271652   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:05:48.279915   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:48.279963   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:48.288458   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:05:48.296996   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:48.297049   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:48.305625   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.313796   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:48.313837   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:48.322211   61447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:05:48.330289   61447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:48.330350   61447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
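The grep/rm pairs above implement the stale-config cleanup the messages describe: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it in the init phases below. A hand-rolled equivalent, assuming the same endpoint:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done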
	I0814 01:05:48.338604   61447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:48.347106   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:48.452598   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.345180   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.535832   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.597770   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:49.711880   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:49.711964   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.212332   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.712073   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:05:50.726301   61447 api_server.go:72] duration metric: took 1.014425118s to wait for apiserver process to appear ...
	I0814 01:05:50.726335   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:05:50.726369   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
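The healthz wait above polls the apiserver over HTTPS; with the default RBAC binding that exposes health endpoints to anonymous users, the same check can be made from the guest with curl, e.g.:

	curl -sk https://192.168.72.94:8443/healthz               # returns "ok" once the apiserver is serving
	curl -sk 'https://192.168.72.94:8443/readyz?verbose'      # per-check breakdown on newer apiservers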
	I0814 01:05:52.086727   61804 start.go:364] duration metric: took 4m12.466611913s to acquireMachinesLock for "old-k8s-version-179312"
	I0814 01:05:52.086801   61804 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:05:52.086811   61804 fix.go:54] fixHost starting: 
	I0814 01:05:52.087240   61804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:52.087282   61804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:52.104210   61804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0814 01:05:52.104679   61804 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:52.105122   61804 main.go:141] libmachine: Using API Version  1
	I0814 01:05:52.105146   61804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:52.105462   61804 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:52.105656   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:05:52.105804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetState
	I0814 01:05:52.107362   61804 fix.go:112] recreateIfNeeded on old-k8s-version-179312: state=Stopped err=<nil>
	I0814 01:05:52.107399   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	W0814 01:05:52.107543   61804 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:05:52.109460   61804 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-179312" ...
	I0814 01:05:50.738825   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Found IP for machine: 192.168.39.110
	I0814 01:05:50.739333   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserving static IP address...
	I0814 01:05:50.739353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has current primary IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.739784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.739819   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Reserved static IP address: 192.168.39.110
	I0814 01:05:50.739844   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | skip adding static IP to network mk-default-k8s-diff-port-585256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-585256", mac: "52:54:00:00:bd:a3", ip: "192.168.39.110"}
	I0814 01:05:50.739871   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Getting to WaitForSSH function...
	I0814 01:05:50.739888   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Waiting for SSH to be available...
	I0814 01:05:50.742187   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742563   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.742597   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.742696   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH client type: external
	I0814 01:05:50.742726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa (-rw-------)
	I0814 01:05:50.742755   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:05:50.742769   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | About to run SSH command:
	I0814 01:05:50.742784   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | exit 0
	I0814 01:05:50.870185   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | SSH cmd err, output: <nil>: 
	I0814 01:05:50.870601   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetConfigRaw
	I0814 01:05:50.871331   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:50.873990   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874371   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.874401   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.874720   61689 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/config.json ...
	I0814 01:05:50.874962   61689 machine.go:94] provisionDockerMachine start ...
	I0814 01:05:50.874984   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:50.875223   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.877460   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877829   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.877868   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.877958   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.878140   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878274   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.878440   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.878596   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.878828   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.878844   61689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:05:50.990920   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:05:50.990952   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991216   61689 buildroot.go:166] provisioning hostname "default-k8s-diff-port-585256"
	I0814 01:05:50.991244   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:50.991445   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:50.994031   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994353   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:50.994384   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:50.994595   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:50.994785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.994936   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:50.995105   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:50.995273   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:50.995458   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:50.995475   61689 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-585256 && echo "default-k8s-diff-port-585256" | sudo tee /etc/hostname
	I0814 01:05:51.115106   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-585256
	
	I0814 01:05:51.115141   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.118113   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118480   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.118509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.118726   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.118932   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119097   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.119218   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.119418   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.119594   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.119619   61689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-585256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-585256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-585256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:05:51.239368   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:05:51.239404   61689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:05:51.239430   61689 buildroot.go:174] setting up certificates
	I0814 01:05:51.239438   61689 provision.go:84] configureAuth start
	I0814 01:05:51.239450   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetMachineName
	I0814 01:05:51.239744   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:51.242426   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.242864   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.242894   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.243061   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.245385   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245774   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.245802   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.245950   61689 provision.go:143] copyHostCerts
	I0814 01:05:51.246001   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:05:51.246012   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:05:51.246090   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:05:51.246184   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:05:51.246192   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:05:51.246211   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:05:51.246268   61689 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:05:51.246274   61689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:05:51.246291   61689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:05:51.246345   61689 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-585256 san=[127.0.0.1 192.168.39.110 default-k8s-diff-port-585256 localhost minikube]
	I0814 01:05:51.390720   61689 provision.go:177] copyRemoteCerts
	I0814 01:05:51.390779   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:05:51.390828   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.393583   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394011   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.394065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.394311   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.394493   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.394648   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.394774   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.479700   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:05:51.501643   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 01:05:51.523469   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 01:05:51.548552   61689 provision.go:87] duration metric: took 309.100404ms to configureAuth
	I0814 01:05:51.548579   61689 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:05:51.548811   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:51.548902   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.551955   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552410   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.552439   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.552657   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.552846   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553007   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.553131   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.553293   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.553506   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.553536   61689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:05:51.836027   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:05:51.836048   61689 machine.go:97] duration metric: took 961.072984ms to provisionDockerMachine
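The provisioning step above wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted CRI-O. Whether the option took effect can be spot-checked from the guest; this is only a sketch and assumes the crio.service unit in the minikube image expands that variable onto the crio command line:

	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expect "active" after the restart
	ps -o args= -C crio                # the --insecure-registry flag should appear here if the unit passes the variable through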
	I0814 01:05:51.836060   61689 start.go:293] postStartSetup for "default-k8s-diff-port-585256" (driver="kvm2")
	I0814 01:05:51.836075   61689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:05:51.836092   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:51.836448   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:05:51.836483   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.839252   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839608   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.839634   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.839785   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.839998   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.840158   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.840306   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:51.928323   61689 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:05:51.932227   61689 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:05:51.932252   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:05:51.932331   61689 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:05:51.932417   61689 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:05:51.932539   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:05:51.941299   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:51.966445   61689 start.go:296] duration metric: took 130.370634ms for postStartSetup
	I0814 01:05:51.966488   61689 fix.go:56] duration metric: took 20.140102397s for fixHost
	I0814 01:05:51.966509   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:51.969169   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.969542   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:51.969574   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:51.970716   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:51.970923   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971093   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:51.971233   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:51.971411   61689 main.go:141] libmachine: Using SSH client type: native
	I0814 01:05:51.971649   61689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0814 01:05:51.971663   61689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:05:52.086583   61689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597552.047212997
	
	I0814 01:05:52.086611   61689 fix.go:216] guest clock: 1723597552.047212997
	I0814 01:05:52.086621   61689 fix.go:229] Guest: 2024-08-14 01:05:52.047212997 +0000 UTC Remote: 2024-08-14 01:05:51.966492542 +0000 UTC m=+253.980961749 (delta=80.720455ms)
	I0814 01:05:52.086647   61689 fix.go:200] guest clock delta is within tolerance: 80.720455ms
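The guest-clock check compares the guest's date +%s.%N output with the host wall clock captured when the command returns, and accepts the machine when the delta is within tolerance (about 81ms here). A rough manual equivalent from the host, using the SSH key and address from this run:

	host_now=$(date +%s.%N)
	guest_now=$(ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa \
	  docker@192.168.39.110 'date +%s.%N')
	awk -v g="$guest_now" -v h="$host_now" 'BEGIN { printf "skew: %+.3fs\n", g - h }'

This ignores the SSH round trip, so it only gives a rough bound, which is all the tolerance check needs.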
	I0814 01:05:52.086653   61689 start.go:83] releasing machines lock for "default-k8s-diff-port-585256", held for 20.260304872s
	I0814 01:05:52.086686   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.086988   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:52.089862   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090237   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.090269   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.090388   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.090896   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091065   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:05:52.091161   61689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:05:52.091208   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.091307   61689 ssh_runner.go:195] Run: cat /version.json
	I0814 01:05:52.091327   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:05:52.094188   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094456   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094520   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.094539   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.094722   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.094906   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095028   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:52.095052   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:52.095095   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095210   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:05:52.095290   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.095355   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:05:52.095505   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:05:52.095657   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:05:52.214838   61689 ssh_runner.go:195] Run: systemctl --version
	I0814 01:05:52.222204   61689 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:05:52.375439   61689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:05:52.381523   61689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:05:52.381609   61689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:05:52.401552   61689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:05:52.401582   61689 start.go:495] detecting cgroup driver to use...
	I0814 01:05:52.401651   61689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:05:52.417919   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:05:52.437217   61689 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:05:52.437288   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:05:52.453875   61689 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:05:52.470300   61689 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:05:52.595346   61689 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:05:52.762539   61689 docker.go:233] disabling docker service ...
	I0814 01:05:52.762616   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:05:52.778328   61689 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:05:52.791736   61689 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:05:52.935414   61689 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:05:53.120909   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:05:53.134424   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:05:53.152618   61689 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:05:53.152693   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.164847   61689 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:05:53.164922   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.176337   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.187338   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.198573   61689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:05:53.208385   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.218220   61689 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.234795   61689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:05:53.251006   61689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:05:53.265820   61689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:05:53.265883   61689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:05:53.285753   61689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
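The sequence above probes net.bridge.bridge-nf-call-iptables, falls back to loading br_netfilter when the key is absent, and then enables IPv4 forwarding. A rough Go sketch of that fallback, run locally rather than over SSH and with simplified error handling:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The key only exists once the br_netfilter module is loaded.
		log.Printf("sysctl check failed (%v), loading br_netfilter", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}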
	I0814 01:05:53.298127   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:53.458646   61689 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:05:53.610690   61689 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:05:53.610765   61689 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:05:53.615292   61689 start.go:563] Will wait 60s for crictl version
	I0814 01:05:53.615348   61689 ssh_runner.go:195] Run: which crictl
	I0814 01:05:53.618756   61689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:05:53.658450   61689 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:05:53.658551   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.685316   61689 ssh_runner.go:195] Run: crio --version
	I0814 01:05:53.715106   61689 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
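Before querying the runtime version, the start-up waits up to 60s for /var/run/crio/crio.sock to appear. A small Go sketch of that poll-then-version pattern (socket path and timeout taken from the log; this is not the actual ssh_runner code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the timeout passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "crictl version: %v\n", err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}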
	I0814 01:05:52.110579   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .Start
	I0814 01:05:52.110744   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring networks are active...
	I0814 01:05:52.111309   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network default is active
	I0814 01:05:52.111709   61804 main.go:141] libmachine: (old-k8s-version-179312) Ensuring network mk-old-k8s-version-179312 is active
	I0814 01:05:52.112094   61804 main.go:141] libmachine: (old-k8s-version-179312) Getting domain xml...
	I0814 01:05:52.112845   61804 main.go:141] libmachine: (old-k8s-version-179312) Creating domain...
	I0814 01:05:53.502995   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting to get IP...
	I0814 01:05:53.504003   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.504428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.504496   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.504392   62858 retry.go:31] will retry after 197.24813ms: waiting for machine to come up
	I0814 01:05:53.702874   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:53.703413   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:53.703435   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:53.703362   62858 retry.go:31] will retry after 310.273767ms: waiting for machine to come up
	I0814 01:05:54.015867   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.016309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.016343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.016247   62858 retry.go:31] will retry after 401.494411ms: waiting for machine to come up
	I0814 01:05:54.419847   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.420305   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.420330   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.420256   62858 retry.go:31] will retry after 407.322632ms: waiting for machine to come up
	I0814 01:05:53.379895   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.379926   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.379939   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.410913   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:05:53.410945   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:05:53.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:53.740840   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:53.740877   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.227186   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.238685   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:05:54.238721   61447 api_server.go:103] status: https://192.168.72.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:05:54.727193   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:05:54.733996   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:05:54.744409   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:05:54.744439   61447 api_server.go:131] duration metric: took 4.018095644s to wait for apiserver health ...
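The healthz loop above sees 403 while anonymous access is still forbidden, 500 while the rbac and priority-class post-start hooks finish, and finally 200. A hedged Go sketch of polling that endpoint (the URL matches the log; InsecureSkipVerify stands in for loading the cluster CA and is only reasonable for a local readiness probe):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the overall deadline is exceeded.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://192.168.72.94:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}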
	I0814 01:05:54.744455   61447 cni.go:84] Creating CNI manager for ""
	I0814 01:05:54.744495   61447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:54.746461   61447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:05:54.748115   61447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:05:54.764310   61447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:05:54.794096   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:05:54.818989   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:05:54.819032   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:05:54.819042   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:05:54.819081   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:05:54.819094   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:05:54.819106   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 01:05:54.819119   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:05:54.819136   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:05:54.819157   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 01:05:54.819172   61447 system_pods.go:74] duration metric: took 25.05113ms to wait for pod list to return data ...
	I0814 01:05:54.819195   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:05:54.826286   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:05:54.826394   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:05:54.826437   61447 node_conditions.go:105] duration metric: took 7.224617ms to run NodePressure ...
	I0814 01:05:54.826473   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:55.135886   61447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142122   61447 kubeadm.go:739] kubelet initialised
	I0814 01:05:55.142142   61447 kubeadm.go:740] duration metric: took 6.231178ms waiting for restarted kubelet to initialise ...
	I0814 01:05:55.142157   61447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:05:55.147513   61447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.153178   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153200   61447 pod_ready.go:81] duration metric: took 5.659541ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.153208   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.153215   61447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.158158   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158182   61447 pod_ready.go:81] duration metric: took 4.958453ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.158192   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "etcd-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.158199   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.164468   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164490   61447 pod_ready.go:81] duration metric: took 6.286201ms for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.164499   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-apiserver-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.164506   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.198966   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199003   61447 pod_ready.go:81] duration metric: took 34.484311ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.199017   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.199026   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.598334   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598365   61447 pod_ready.go:81] duration metric: took 399.329275ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.598377   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-proxy-pgm9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.598386   61447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:55.998091   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998127   61447 pod_ready.go:81] duration metric: took 399.731033ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:55.998142   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "kube-scheduler-no-preload-776907" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:55.998152   61447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:05:56.397421   61447 pod_ready.go:97] node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397448   61447 pod_ready.go:81] duration metric: took 399.277712ms for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:05:56.397458   61447 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-776907" hosting pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:56.397465   61447 pod_ready.go:38] duration metric: took 1.255299191s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
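The pod_ready wait above repeatedly checks whether each system-critical pod has its PodReady condition set to True, skipping pods hosted on a node that is not yet Ready. A condensed client-go sketch of one such wait (the kubeconfig path and pod name are placeholders, not values from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-example", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}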
	I0814 01:05:56.397481   61447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:05:56.409600   61447 ops.go:34] apiserver oom_adj: -16
	I0814 01:05:56.409643   61447 kubeadm.go:597] duration metric: took 8.215521031s to restartPrimaryControlPlane
	I0814 01:05:56.409656   61447 kubeadm.go:394] duration metric: took 8.258927601s to StartCluster
	I0814 01:05:56.409677   61447 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.409769   61447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:05:56.411135   61447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:56.411434   61447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.94 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:05:56.411510   61447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:05:56.411605   61447 addons.go:69] Setting storage-provisioner=true in profile "no-preload-776907"
	I0814 01:05:56.411639   61447 addons.go:234] Setting addon storage-provisioner=true in "no-preload-776907"
	W0814 01:05:56.411651   61447 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:05:56.411692   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.411702   61447 config.go:182] Loaded profile config "no-preload-776907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:05:56.411755   61447 addons.go:69] Setting default-storageclass=true in profile "no-preload-776907"
	I0814 01:05:56.411792   61447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-776907"
	I0814 01:05:56.412127   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412169   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412221   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412238   61447 addons.go:69] Setting metrics-server=true in profile "no-preload-776907"
	I0814 01:05:56.412249   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.412272   61447 addons.go:234] Setting addon metrics-server=true in "no-preload-776907"
	W0814 01:05:56.412289   61447 addons.go:243] addon metrics-server should already be in state true
	I0814 01:05:56.412325   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.412679   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.412726   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.413470   61447 out.go:177] * Verifying Kubernetes components...
	I0814 01:05:56.414907   61447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:56.432617   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0814 01:05:56.433633   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.433655   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0814 01:05:56.433682   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0814 01:05:56.434304   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434325   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.434348   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.434768   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.434828   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.434849   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.435292   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.435318   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.435500   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.436085   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.436133   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.436678   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.438722   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.438744   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.439300   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.442254   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.445951   61447 addons.go:234] Setting addon default-storageclass=true in "no-preload-776907"
	W0814 01:05:56.445969   61447 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:05:56.445997   61447 host.go:66] Checking if "no-preload-776907" exists ...
	I0814 01:05:56.446331   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.446364   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.457855   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36297
	I0814 01:05:56.459973   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0814 01:05:56.460484   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.461068   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.461089   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.461565   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.462741   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.462899   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.462913   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.463577   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.463640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464100   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:56.464341   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0814 01:05:56.465394   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.465878   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.465995   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.466007   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.466617   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.466684   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.467327   61447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:05:56.467367   61447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:05:56.468708   61447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:05:56.468802   61447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:05:56.469927   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:05:56.469944   61447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:05:56.469963   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.473235   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473684   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.473705   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.473879   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.474052   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.474176   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.474181   61447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.474230   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:05:56.474244   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.474328   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.477789   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478291   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.478307   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.478643   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.478813   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.478932   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.479056   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.506690   61447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0814 01:05:56.507196   61447 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:05:56.507726   61447 main.go:141] libmachine: Using API Version  1
	I0814 01:05:56.507750   61447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:05:56.508129   61447 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:05:56.508352   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetState
	I0814 01:05:53.716678   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetIP
	I0814 01:05:53.719662   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720132   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:05:53.720161   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:05:53.720382   61689 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 01:05:53.724276   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:05:53.736896   61689 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:05:53.737033   61689 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:05:53.737090   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:53.786464   61689 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:05:53.786549   61689 ssh_runner.go:195] Run: which lz4
	I0814 01:05:53.791254   61689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:05:53.796216   61689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:05:53.796251   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:05:55.074296   61689 crio.go:462] duration metric: took 1.283077887s to copy over tarball
	I0814 01:05:55.074381   61689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:05:57.330151   61689 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255736783s)
	I0814 01:05:57.330183   61689 crio.go:469] duration metric: took 2.255855524s to extract the tarball
	I0814 01:05:57.330193   61689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:05:57.390001   61689 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:05:57.438765   61689 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:05:57.438795   61689 cache_images.go:84] Images are preloaded, skipping loading
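Because crictl initially reported no kube-apiserver image, the cached preloaded-images tarball is copied to the guest and unpacked into /var. A rough local Go sketch of that check-then-extract step (the crude length check stands in for real JSON parsing of the image list, and the scp step is elided):

package main

import (
	"fmt"
	"os/exec"
)

// imagesPreloaded asks the CRI runtime for its image list; any error or an
// essentially empty result means the preload still has to be restored.
func imagesPreloaded() bool {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	return err == nil && len(out) > 100 // crude stand-in for parsing the JSON image list
}

func main() {
	if imagesPreloaded() {
		fmt.Println("all images are preloaded, skipping extraction")
		return
	}
	// Mirrors `sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extracting preload failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload extracted")
}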
	I0814 01:05:57.438804   61689 kubeadm.go:934] updating node { 192.168.39.110 8444 v1.31.0 crio true true} ...
	I0814 01:05:57.438939   61689 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-585256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:05:57.439019   61689 ssh_runner.go:195] Run: crio config
	I0814 01:05:57.487432   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:05:57.487456   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:05:57.487468   61689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:05:57.487488   61689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-585256 NodeName:default-k8s-diff-port-585256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:05:57.487628   61689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-585256"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:05:57.487683   61689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:05:57.499806   61689 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:05:57.499875   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:05:57.508987   61689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 01:05:57.527561   61689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:05:57.546193   61689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 01:05:57.566209   61689 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0814 01:05:57.569852   61689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
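The hosts update above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal line and appending the fresh mapping, so repeated starts stay idempotent. A small Go sketch of the same upsert (path, IP and hostname taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing entry for host and appends "ip<TAB>host".
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.39.110", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}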
	I0814 01:05:57.584800   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:05:57.718643   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:57.739124   61689 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256 for IP: 192.168.39.110
	I0814 01:05:57.739153   61689 certs.go:194] generating shared ca certs ...
	I0814 01:05:57.739174   61689 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:05:57.739390   61689 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:05:57.739461   61689 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:05:57.739476   61689 certs.go:256] generating profile certs ...
	I0814 01:05:57.739607   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.key
	I0814 01:05:57.739700   61689 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key.7cbada89
	I0814 01:05:57.739764   61689 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key
	I0814 01:05:57.739951   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:05:57.740000   61689 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:05:57.740017   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:05:57.740054   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:05:57.740096   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:05:57.740128   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:05:57.740198   61689 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:05:57.740914   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:05:57.776830   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:05:57.805557   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:05:57.838303   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:05:57.878807   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 01:05:57.918149   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:05:57.951098   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:05:57.979966   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 01:05:58.008045   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:05:56.510326   61447 main.go:141] libmachine: (no-preload-776907) Calling .DriverName
	I0814 01:05:56.510711   61447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.510727   61447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:05:56.510746   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHHostname
	I0814 01:05:56.513933   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514347   61447 main.go:141] libmachine: (no-preload-776907) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:29:79", ip: ""} in network mk-no-preload-776907: {Iface:virbr3 ExpiryTime:2024-08-14 02:05:22 +0000 UTC Type:0 Mac:52:54:00:96:29:79 Iaid: IPaddr:192.168.72.94 Prefix:24 Hostname:no-preload-776907 Clientid:01:52:54:00:96:29:79}
	I0814 01:05:56.514366   61447 main.go:141] libmachine: (no-preload-776907) DBG | domain no-preload-776907 has defined IP address 192.168.72.94 and MAC address 52:54:00:96:29:79 in network mk-no-preload-776907
	I0814 01:05:56.514640   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHPort
	I0814 01:05:56.514790   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHKeyPath
	I0814 01:05:56.514921   61447 main.go:141] libmachine: (no-preload-776907) Calling .GetSSHUsername
	I0814 01:05:56.515041   61447 sshutil.go:53] new ssh client: &{IP:192.168.72.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/no-preload-776907/id_rsa Username:docker}
	I0814 01:05:56.648210   61447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:05:56.669968   61447 node_ready.go:35] waiting up to 6m0s for node "no-preload-776907" to be "Ready" ...
	I0814 01:05:56.752258   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:05:56.752282   61447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:05:56.784534   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:05:56.784570   61447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:05:56.797555   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:05:56.811711   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:05:56.852143   61447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:56.852222   61447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:05:56.896802   61447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:05:57.332181   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332207   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332534   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332552   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332562   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.332570   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.332892   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.332908   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.332999   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:57.377695   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:57.377726   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:57.378310   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:57.378335   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:57.378307   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285384   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388491618s)
	I0814 01:05:58.285399   61447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.473604802s)
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285466   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285438   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285542   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285816   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285858   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.285874   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285881   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.285890   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285897   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.285903   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285908   61447 main.go:141] libmachine: Making call to close driver server
	I0814 01:05:58.285915   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.285934   61447 main.go:141] libmachine: (no-preload-776907) Calling .Close
	I0814 01:05:58.286168   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.286180   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287529   61447 main.go:141] libmachine: (no-preload-776907) DBG | Closing plugin on server side
	I0814 01:05:58.287541   61447 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:05:58.287560   61447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:05:58.287576   61447 addons.go:475] Verifying addon metrics-server=true in "no-preload-776907"
	I0814 01:05:58.289411   61447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
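	Once the addon manifests above have been applied, the result can be verified from the host with plain kubectl; a small sketch, assuming the kubectl context carries the profile name as minikube normally configures it:
	
	  # Deployment created from metrics-server-deployment.yaml
	  kubectl --context no-preload-776907 -n kube-system get deployment metrics-server
	  # aggregated API registered from metrics-apiservice.yaml
	  kubectl --context no-preload-776907 get apiservice v1beta1.metrics.k8s.io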
	I0814 01:05:54.828943   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:54.829542   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:54.829567   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:54.829451   62858 retry.go:31] will retry after 761.368258ms: waiting for machine to come up
	I0814 01:05:55.592398   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:55.593051   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:55.593077   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:55.592959   62858 retry.go:31] will retry after 776.526082ms: waiting for machine to come up
	I0814 01:05:56.370701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:56.371193   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:56.371214   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:56.371176   62858 retry.go:31] will retry after 1.033572565s: waiting for machine to come up
	I0814 01:05:57.407052   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:57.407572   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:57.407608   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:57.407514   62858 retry.go:31] will retry after 1.075443116s: waiting for machine to come up
	I0814 01:05:58.484020   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:05:58.484428   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:05:58.484450   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:05:58.484400   62858 retry.go:31] will retry after 1.753983606s: waiting for machine to come up
	I0814 01:05:58.290516   61447 addons.go:510] duration metric: took 1.879011423s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 01:05:58.674495   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:00.726396   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:05:58.035164   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:05:58.062151   61689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:05:58.088779   61689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:05:58.104815   61689 ssh_runner.go:195] Run: openssl version
	I0814 01:05:58.111743   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:05:58.122523   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126771   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.126827   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:05:58.132103   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:05:58.143604   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:05:58.155065   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160457   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.160511   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:05:58.167417   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:05:58.180825   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:05:58.193263   61689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198571   61689 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.198637   61689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:05:58.205645   61689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
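	The openssl -hash / ln -fs pairs above follow OpenSSL's c_rehash convention: each CA certificate is linked under /etc/ssl/certs as its subject-name hash plus a ".0" suffix, which is how OpenSSL-based clients on the node locate trusted CAs. Done by hand for one of the files in this log, the equivalent is roughly:
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run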
	I0814 01:05:58.219088   61689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:05:58.224431   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:05:58.231762   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:05:58.238996   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:05:58.244758   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:05:58.250112   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:05:58.257224   61689 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 01:05:58.262563   61689 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-585256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-585256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:05:58.262677   61689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:05:58.262745   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.309680   61689 cri.go:89] found id: ""
	I0814 01:05:58.309753   61689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:05:58.319775   61689 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:05:58.319796   61689 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:05:58.319852   61689 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:05:58.329093   61689 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:05:58.330026   61689 kubeconfig.go:125] found "default-k8s-diff-port-585256" server: "https://192.168.39.110:8444"
	I0814 01:05:58.332001   61689 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:05:58.341206   61689 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0814 01:05:58.341235   61689 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:05:58.341247   61689 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:05:58.341311   61689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:05:58.376929   61689 cri.go:89] found id: ""
	I0814 01:05:58.376991   61689 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:05:58.393789   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:05:58.402954   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:05:58.402979   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:05:58.403032   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:05:58.412025   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:05:58.412081   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:05:58.421031   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:05:58.429702   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:05:58.429774   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:05:58.438859   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.447047   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:05:58.447106   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:05:58.455697   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:05:58.463942   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:05:58.464004   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:05:58.472399   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:05:58.481173   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:58.591187   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.150641   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.356842   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.416846   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:05:59.500693   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:05:59.500779   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.001860   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:00.500969   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.001662   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:01.030737   61689 api_server.go:72] duration metric: took 1.530044643s to wait for apiserver process to appear ...
	I0814 01:06:01.030766   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:01.030790   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:01.031270   61689 api_server.go:269] stopped: https://192.168.39.110:8444/healthz: Get "https://192.168.39.110:8444/healthz": dial tcp 192.168.39.110:8444: connect: connection refused
	I0814 01:06:01.530913   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:00.239701   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:00.240210   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:00.240234   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:00.240157   62858 retry.go:31] will retry after 1.471169968s: waiting for machine to come up
	I0814 01:06:01.713921   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:01.714410   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:01.714449   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:01.714385   62858 retry.go:31] will retry after 2.509653415s: waiting for machine to come up
	I0814 01:06:04.225883   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:04.226391   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:04.226417   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:04.226346   62858 retry.go:31] will retry after 3.61921572s: waiting for machine to come up
	I0814 01:06:04.011296   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.011342   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.011359   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.030095   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:04.030128   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:04.031159   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.149715   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.149760   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:04.530942   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:04.541074   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:04.541119   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.031232   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.036252   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:05.036278   61689 api_server.go:103] status: https://192.168.39.110:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:05.531902   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:06:05.536016   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:06:05.542693   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:05.542718   61689 api_server.go:131] duration metric: took 4.511944733s to wait for apiserver health ...
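	The /healthz polling above can be reproduced by hand once the apiserver answers; anonymous requests are rejected (the 403s earlier), so a sketch that reuses the admin kubeconfig kubeadm writes on the node and asks for the same verbose per-check output seen in this log:
	
	  sudo KUBECONFIG=/etc/kubernetes/admin.conf \
	    /var/lib/minikube/binaries/v1.31.0/kubectl get --raw '/healthz?verbose'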
	I0814 01:06:05.542728   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:06:05.542736   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:05.544557   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:03.174271   61447 node_ready.go:53] node "no-preload-776907" has status "Ready":"False"
	I0814 01:06:04.174287   61447 node_ready.go:49] node "no-preload-776907" has status "Ready":"True"
	I0814 01:06:04.174312   61447 node_ready.go:38] duration metric: took 7.504312709s for node "no-preload-776907" to be "Ready" ...
	I0814 01:06:04.174324   61447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:04.181275   61447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187150   61447 pod_ready.go:92] pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.187171   61447 pod_ready.go:81] duration metric: took 5.866488ms for pod "coredns-6f6b679f8f-dz9zk" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.187180   61447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192673   61447 pod_ready.go:92] pod "etcd-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:04.192694   61447 pod_ready.go:81] duration metric: took 5.50752ms for pod "etcd-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:04.192705   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.199283   61447 pod_ready.go:102] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:05.545819   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:05.556019   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
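	The 496-byte file written above is minikube's bridge CNI configuration for the pod subnet declared in the kubeadm config (10.244.0.0/16). Its contents are not echoed in the log, but they can be inspected directly on the node:
	
	  sudo ls -la /etc/cni/net.d/
	  sudo cat /etc/cni/net.d/1-k8s.conflist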
	I0814 01:06:05.598403   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:05.608687   61689 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:05.608718   61689 system_pods.go:61] "coredns-6f6b679f8f-7vdsf" [ea069874-e3a9-41a4-b038-cfca429e60cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:05.608730   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [922a7db1-2b4d-4f7b-af08-3ed730f1d6e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:05.608737   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [2db632ae-aaf3-4df4-85b2-7ba505297efb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:05.608743   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [d9cc182b-9153-4606-a719-465aed72c481] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:05.608747   61689 system_pods.go:61] "kube-proxy-cz77l" [67d1af69-ecbd-4564-be50-f96936604345] Running
	I0814 01:06:05.608751   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [f0e99120-b573-4eb6-909f-a9b79886ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:05.608755   61689 system_pods.go:61] "metrics-server-6867b74b74-6cql9" [f1213ad4-770d-4b81-96b9-7b5e10f2a23a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:05.608760   61689 system_pods.go:61] "storage-provisioner" [589b83be-2ad6-4b16-829f-cb944487303c] Running
	I0814 01:06:05.608766   61689 system_pods.go:74] duration metric: took 10.339955ms to wait for pod list to return data ...
	I0814 01:06:05.608772   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:05.612993   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:05.613024   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:05.613037   61689 node_conditions.go:105] duration metric: took 4.259435ms to run NodePressure ...
	I0814 01:06:05.613055   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:05.884859   61689 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889608   61689 kubeadm.go:739] kubelet initialised
	I0814 01:06:05.889636   61689 kubeadm.go:740] duration metric: took 4.742229ms waiting for restarted kubelet to initialise ...
	I0814 01:06:05.889644   61689 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:05.991222   61689 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:05.997411   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997442   61689 pod_ready.go:81] duration metric: took 6.186188ms for pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:05.997455   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "coredns-6f6b679f8f-7vdsf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:05.997463   61689 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.008153   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008188   61689 pod_ready.go:81] duration metric: took 10.714691ms for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.008204   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.008213   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.013480   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013500   61689 pod_ready.go:81] duration metric: took 5.279106ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.013510   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.013517   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.022821   61689 pod_ready.go:97] node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022841   61689 pod_ready.go:81] duration metric: took 9.318586ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	E0814 01:06:06.022851   61689 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-585256" hosting pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-585256" has status "Ready":"False"
	I0814 01:06:06.022857   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402225   61689 pod_ready.go:92] pod "kube-proxy-cz77l" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:06.402251   61689 pod_ready.go:81] duration metric: took 379.387097ms for pod "kube-proxy-cz77l" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:06.402267   61689 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.847343   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:07.847844   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | unable to find current IP address of domain old-k8s-version-179312 in network mk-old-k8s-version-179312
	I0814 01:06:07.847879   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | I0814 01:06:07.847800   62858 retry.go:31] will retry after 2.983420512s: waiting for machine to come up
	I0814 01:06:07.699362   61447 pod_ready.go:92] pod "kube-apiserver-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.699393   61447 pod_ready.go:81] duration metric: took 3.506678951s for pod "kube-apiserver-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.699407   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704007   61447 pod_ready.go:92] pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.704028   61447 pod_ready.go:81] duration metric: took 4.613152ms for pod "kube-controller-manager-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.704038   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708027   61447 pod_ready.go:92] pod "kube-proxy-pgm9t" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.708044   61447 pod_ready.go:81] duration metric: took 3.999792ms for pod "kube-proxy-pgm9t" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.708052   61447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774591   61447 pod_ready.go:92] pod "kube-scheduler-no-preload-776907" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:07.774621   61447 pod_ready.go:81] duration metric: took 66.56102ms for pod "kube-scheduler-no-preload-776907" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:07.774642   61447 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:09.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.050400   61115 start.go:364] duration metric: took 54.455049928s to acquireMachinesLock for "embed-certs-901410"
	I0814 01:06:12.050448   61115 start.go:96] Skipping create...Using existing machine configuration
	I0814 01:06:12.050458   61115 fix.go:54] fixHost starting: 
	I0814 01:06:12.050897   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:06:12.050932   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:06:12.067865   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
	I0814 01:06:12.068209   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:06:12.068726   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:06:12.068757   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:06:12.069116   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:06:12.069354   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:12.069516   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:06:12.070994   61115 fix.go:112] recreateIfNeeded on embed-certs-901410: state=Stopped err=<nil>
	I0814 01:06:12.071029   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	W0814 01:06:12.071156   61115 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 01:06:12.072932   61115 out.go:177] * Restarting existing kvm2 VM for "embed-certs-901410" ...
	I0814 01:06:08.410114   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:10.909528   61689 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:12.911385   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:12.911416   61689 pod_ready.go:81] duration metric: took 6.509140238s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:12.911432   61689 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:10.834861   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835358   61804 main.go:141] libmachine: (old-k8s-version-179312) Found IP for machine: 192.168.61.123
	I0814 01:06:10.835381   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserving static IP address...
	I0814 01:06:10.835396   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has current primary IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.835795   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.835827   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | skip adding static IP to network mk-old-k8s-version-179312 - found existing host DHCP lease matching {name: "old-k8s-version-179312", mac: "52:54:00:b2:76:73", ip: "192.168.61.123"}
	I0814 01:06:10.835846   61804 main.go:141] libmachine: (old-k8s-version-179312) Reserved static IP address: 192.168.61.123
	I0814 01:06:10.835866   61804 main.go:141] libmachine: (old-k8s-version-179312) Waiting for SSH to be available...
	I0814 01:06:10.835880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Getting to WaitForSSH function...
	I0814 01:06:10.837965   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838336   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.838379   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.838482   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH client type: external
	I0814 01:06:10.838520   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa (-rw-------)
	I0814 01:06:10.838549   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:10.838568   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | About to run SSH command:
	I0814 01:06:10.838578   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | exit 0
	I0814 01:06:10.965836   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:10.966231   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetConfigRaw
	I0814 01:06:10.966912   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:10.969194   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969535   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.969560   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.969789   61804 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/config.json ...
	I0814 01:06:10.969969   61804 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:10.969987   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:10.970183   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:10.972010   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972332   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:10.972361   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:10.972476   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:10.972658   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972807   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:10.972942   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:10.973088   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:10.973257   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:10.973267   61804 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:11.074077   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:11.074111   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074328   61804 buildroot.go:166] provisioning hostname "old-k8s-version-179312"
	I0814 01:06:11.074364   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.074666   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.077309   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077697   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.077730   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.077803   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.077990   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078161   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.078304   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.078510   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.078729   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.078743   61804 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-179312 && echo "old-k8s-version-179312" | sudo tee /etc/hostname
	I0814 01:06:11.193209   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-179312
	
	I0814 01:06:11.193241   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.195907   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196315   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.196342   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.196569   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.196774   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.196936   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.197079   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.197234   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.197448   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.197477   61804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-179312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-179312/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-179312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:11.312005   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:11.312037   61804 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:11.312082   61804 buildroot.go:174] setting up certificates
	I0814 01:06:11.312093   61804 provision.go:84] configureAuth start
	I0814 01:06:11.312103   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetMachineName
	I0814 01:06:11.312396   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:11.315412   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.315909   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.315952   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.316043   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.318283   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318603   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.318630   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.318791   61804 provision.go:143] copyHostCerts
	I0814 01:06:11.318852   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:11.318875   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:11.318944   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:11.319073   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:11.319085   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:11.319115   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:11.319199   61804 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:11.319209   61804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:11.319262   61804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:11.319351   61804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-179312 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-179312]
	I0814 01:06:11.396260   61804 provision.go:177] copyRemoteCerts
	I0814 01:06:11.396338   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:11.396372   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.399365   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399788   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.399824   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.399989   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.400186   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.400349   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.400555   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.483862   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:11.506282   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 01:06:11.529014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:11.550986   61804 provision.go:87] duration metric: took 238.880389ms to configureAuth
	I0814 01:06:11.551022   61804 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:11.551253   61804 config.go:182] Loaded profile config "old-k8s-version-179312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 01:06:11.551330   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.554244   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554622   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.554655   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.554880   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.555073   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555249   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.555402   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.555590   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.555834   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.555856   61804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:11.824529   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:11.824553   61804 machine.go:97] duration metric: took 854.572333ms to provisionDockerMachine
	I0814 01:06:11.824569   61804 start.go:293] postStartSetup for "old-k8s-version-179312" (driver="kvm2")
	I0814 01:06:11.824581   61804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:11.824626   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:11.824929   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:11.824952   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.828165   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828510   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.828545   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.828693   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.828883   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.829032   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.829206   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:11.909667   61804 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:11.913426   61804 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:11.913452   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:11.913530   61804 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:11.913630   61804 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:11.913753   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:11.923687   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:11.946123   61804 start.go:296] duration metric: took 121.53594ms for postStartSetup
	I0814 01:06:11.946172   61804 fix.go:56] duration metric: took 19.859362691s for fixHost
	I0814 01:06:11.946192   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:11.948880   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949241   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:11.949264   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:11.949490   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:11.949702   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.949889   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:11.950031   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:11.950210   61804 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:11.950390   61804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I0814 01:06:11.950403   61804 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 01:06:12.050230   61804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597572.007643909
	
	I0814 01:06:12.050252   61804 fix.go:216] guest clock: 1723597572.007643909
	I0814 01:06:12.050259   61804 fix.go:229] Guest: 2024-08-14 01:06:12.007643909 +0000 UTC Remote: 2024-08-14 01:06:11.946176003 +0000 UTC m=+272.466568091 (delta=61.467906ms)
	I0814 01:06:12.050292   61804 fix.go:200] guest clock delta is within tolerance: 61.467906ms
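	(Aside: the fix.go lines above compare the guest clock against the host clock and only proceed when the skew is small. A minimal Go sketch of that comparison follows; the 2-second tolerance and the helper name are illustrative assumptions, not minikube's actual values.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by less than
// the allowed skew - the shape of the "guest clock delta is within tolerance"
// check logged above. Hypothetical helper, not minikube's API.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above (guest vs. remote clock).
	guest := time.Unix(1723597572, 7643909)
	host := guest.Add(-61467906 * time.Nanosecond)
	delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}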
	I0814 01:06:12.050297   61804 start.go:83] releasing machines lock for "old-k8s-version-179312", held for 19.963518958s
	I0814 01:06:12.050328   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.050593   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:12.053723   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054140   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.054170   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.054376   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054804   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.054992   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .DriverName
	I0814 01:06:12.055076   61804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:12.055137   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.055191   61804 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:12.055216   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHHostname
	I0814 01:06:12.058027   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058378   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058404   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058455   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058684   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.058796   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:12.058828   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:12.058874   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059041   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059107   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHPort
	I0814 01:06:12.059179   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.059276   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHKeyPath
	I0814 01:06:12.059582   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetSSHUsername
	I0814 01:06:12.059721   61804 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/old-k8s-version-179312/id_rsa Username:docker}
	I0814 01:06:12.169671   61804 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:12.175640   61804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:12.326156   61804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:12.332951   61804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:12.333015   61804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:12.351706   61804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:12.351737   61804 start.go:495] detecting cgroup driver to use...
	I0814 01:06:12.351808   61804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:12.367945   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:12.381540   61804 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:12.381607   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:12.394497   61804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:12.408848   61804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:12.530080   61804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:12.705566   61804 docker.go:233] disabling docker service ...
	I0814 01:06:12.705627   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:12.721274   61804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:12.736855   61804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:12.851178   61804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:12.973876   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:12.987600   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:13.004553   61804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 01:06:13.004656   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.014424   61804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:13.014507   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.024038   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.033588   61804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:13.043124   61804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:13.052585   61804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:13.061221   61804 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:13.061308   61804 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:13.075277   61804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
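	(Aside: the commands above show the fallback when the bridge-netfilter sysctl key is missing: load br_netfilter, then enable IPv4 forwarding. A minimal local Go sketch of that sequence; minikube runs the same commands over SSH via ssh_runner, and the function name here is hypothetical.)

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback logged above: probe the sysctl
// key, load br_netfilter if it is missing, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}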
	I0814 01:06:13.087018   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:13.227288   61804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:13.372753   61804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:13.372848   61804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
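	(Aside: "Will wait 60s for socket path" above is a poll-until-exists wait on /var/run/crio/crio.sock after restarting CRI-O. A minimal Go sketch of that pattern; the poll interval and function name are illustrative, and minikube performs the stat over SSH rather than locally.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}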
	I0814 01:06:13.377444   61804 start.go:563] Will wait 60s for crictl version
	I0814 01:06:13.377499   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:13.381068   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:13.430604   61804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:13.430694   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.460827   61804 ssh_runner.go:195] Run: crio --version
	I0814 01:06:13.491550   61804 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 01:06:13.492760   61804 main.go:141] libmachine: (old-k8s-version-179312) Calling .GetIP
	I0814 01:06:13.495846   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496218   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:76:73", ip: ""} in network mk-old-k8s-version-179312: {Iface:virbr4 ExpiryTime:2024-08-14 02:06:03 +0000 UTC Type:0 Mac:52:54:00:b2:76:73 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-179312 Clientid:01:52:54:00:b2:76:73}
	I0814 01:06:13.496255   61804 main.go:141] libmachine: (old-k8s-version-179312) DBG | domain old-k8s-version-179312 has defined IP address 192.168.61.123 and MAC address 52:54:00:b2:76:73 in network mk-old-k8s-version-179312
	I0814 01:06:13.496435   61804 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:13.500489   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:13.512643   61804 kubeadm.go:883] updating cluster {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:13.512785   61804 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 01:06:13.512842   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:13.560050   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:13.560112   61804 ssh_runner.go:195] Run: which lz4
	I0814 01:06:13.564105   61804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:13.567985   61804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:13.568014   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 01:06:12.074155   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Start
	I0814 01:06:12.074285   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring networks are active...
	I0814 01:06:12.074948   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network default is active
	I0814 01:06:12.075282   61115 main.go:141] libmachine: (embed-certs-901410) Ensuring network mk-embed-certs-901410 is active
	I0814 01:06:12.075694   61115 main.go:141] libmachine: (embed-certs-901410) Getting domain xml...
	I0814 01:06:12.076354   61115 main.go:141] libmachine: (embed-certs-901410) Creating domain...
	I0814 01:06:13.425468   61115 main.go:141] libmachine: (embed-certs-901410) Waiting to get IP...
	I0814 01:06:13.426367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.426876   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.426936   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.426842   63044 retry.go:31] will retry after 280.861769ms: waiting for machine to come up
	I0814 01:06:13.709645   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:13.710369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:13.710524   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:13.710442   63044 retry.go:31] will retry after 316.02196ms: waiting for machine to come up
	I0814 01:06:14.028197   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.028722   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.028751   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.028683   63044 retry.go:31] will retry after 317.388844ms: waiting for machine to come up
	I0814 01:06:14.347390   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.347888   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.347917   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.347834   63044 retry.go:31] will retry after 422.687955ms: waiting for machine to come up
	I0814 01:06:14.772182   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:14.772756   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:14.772785   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:14.772704   63044 retry.go:31] will retry after 517.722001ms: waiting for machine to come up
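	(Aside: the retry.go lines above poll for the embed-certs VM's IP with short, varied delays until the machine comes up. A minimal Go sketch of that retry-with-backoff shape; the attempt count and delay bounds are illustrative, not minikube's retry.go API.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a short randomized delay until it succeeds
// or attempts run out - the same shape as the
// "will retry after ...ms: waiting for machine to come up" lines above.
func retryUntil(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryUntil(3, func() error { return errors.New("waiting for machine to come up") })
	fmt.Println("final:", err)
}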
	I0814 01:06:11.781300   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:13.782226   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.782509   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:14.919068   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:16.920536   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:15.010425   61804 crio.go:462] duration metric: took 1.446361159s to copy over tarball
	I0814 01:06:15.010503   61804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 01:06:17.960543   61804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.950002604s)
	I0814 01:06:17.960583   61804 crio.go:469] duration metric: took 2.950131362s to extract the tarball
	I0814 01:06:17.960595   61804 ssh_runner.go:146] rm: /preloaded.tar.lz4
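	(Aside: the preload path above stats /preloaded.tar.lz4, copies the tarball over, extracts it with tar -I lz4 into /var, and reports a duration metric before removing it. A minimal Go sketch of the extract-and-time step, run locally here; minikube issues the same command over SSH.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs the tar command from the log above and reports how long
// the extraction took, mirroring the "duration metric" line.
func extractPreload() error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := cmd.Run(); err != nil {
		return err
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload(); err != nil {
		fmt.Println(err)
	}
}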
	I0814 01:06:18.002898   61804 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:18.039862   61804 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 01:06:18.039887   61804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 01:06:18.039949   61804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.039976   61804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.040029   61804 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.040037   61804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.040076   61804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.040092   61804 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.040279   61804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.040285   61804 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041502   61804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:18.041605   61804 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.041642   61804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.041655   61804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.041683   61804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.041709   61804 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 01:06:18.041712   61804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.041643   61804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.267865   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 01:06:18.300630   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.309691   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.312711   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.319830   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.333483   61804 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 01:06:18.333571   61804 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 01:06:18.333617   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.333854   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.355530   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.460940   61804 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 01:06:18.460989   61804 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.460991   61804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 01:06:18.461028   61804 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.461038   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.461072   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466105   61804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 01:06:18.466146   61804 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.466158   61804 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 01:06:18.466194   61804 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.466200   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466232   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.466109   61804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 01:06:18.466290   61804 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.466163   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.466338   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.471203   61804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 01:06:18.471244   61804 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.471327   61804 ssh_runner.go:195] Run: which crictl
	I0814 01:06:18.477596   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.477709   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.477741   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.536417   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.536483   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.536443   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.536516   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.560937   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.560979   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.571932   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.690215   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.690271   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.690385   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 01:06:18.690416   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.710801   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 01:06:18.722130   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 01:06:18.722180   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 01:06:18.854942   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 01:06:18.854975   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 01:06:18.855019   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 01:06:18.855064   61804 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 01:06:18.855069   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 01:06:18.855143   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 01:06:18.855197   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 01:06:18.917832   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 01:06:18.917892   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 01:06:18.919778   61804 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 01:06:18.937014   61804 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:06:19.077956   61804 cache_images.go:92] duration metric: took 1.038051355s to LoadCachedImages
	W0814 01:06:19.078050   61804 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19429-9425/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 01:06:19.078068   61804 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I0814 01:06:19.078198   61804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-179312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:19.078309   61804 ssh_runner.go:195] Run: crio config
	I0814 01:06:19.126091   61804 cni.go:84] Creating CNI manager for ""
	I0814 01:06:19.126114   61804 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:19.126129   61804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:19.126159   61804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-179312 NodeName:old-k8s-version-179312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 01:06:19.126325   61804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-179312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:19.126402   61804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 01:06:19.136422   61804 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:19.136481   61804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:19.145476   61804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0814 01:06:19.161780   61804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:19.178893   61804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
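	The 2123-byte file transferred above is the kubeadm/kubelet/kube-proxy configuration rendered a few lines earlier. A minimal sketch of how that rendered file could be inspected by hand over SSH, assuming the old-k8s-version-179312 profile named in this log (paths may differ between minikube releases):
	
	# sketch: view the freshly rendered config on the node
	minikube -p old-k8s-version-179312 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# compare against the copy the existing control plane was started from
	minikube -p old-k8s-version-179312 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new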
	I0814 01:06:19.196515   61804 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:19.200204   61804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:19.211943   61804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:19.333517   61804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:19.350008   61804 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312 for IP: 192.168.61.123
	I0814 01:06:19.350055   61804 certs.go:194] generating shared ca certs ...
	I0814 01:06:19.350094   61804 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.350294   61804 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:19.350371   61804 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:19.350387   61804 certs.go:256] generating profile certs ...
	I0814 01:06:19.350530   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.key
	I0814 01:06:19.350603   61804 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key.6e56bf34
	I0814 01:06:19.350667   61804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key
	I0814 01:06:19.350846   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:19.350928   61804 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:19.350958   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:19.350995   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:19.351032   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:19.351076   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:19.351152   61804 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:19.352060   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:19.400249   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:19.430497   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:19.478315   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:19.507327   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 01:06:15.292336   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.292816   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.292847   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.292765   63044 retry.go:31] will retry after 585.844986ms: waiting for machine to come up
	I0814 01:06:15.880233   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:15.880833   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:15.880903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:15.880810   63044 retry.go:31] will retry after 827.81891ms: waiting for machine to come up
	I0814 01:06:16.710168   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:16.710630   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:16.710671   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:16.710577   63044 retry.go:31] will retry after 1.430172339s: waiting for machine to come up
	I0814 01:06:18.142094   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:18.142557   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:18.142604   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:18.142477   63044 retry.go:31] will retry after 1.240583508s: waiting for machine to come up
	I0814 01:06:19.384686   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:19.385102   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:19.385132   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:19.385044   63044 retry.go:31] will retry after 2.005758756s: waiting for machine to come up
	I0814 01:06:18.281722   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:20.571594   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.619695   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:21.918897   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:19.535095   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 01:06:19.564128   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:19.600227   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:19.624712   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:19.649975   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:19.673278   61804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:19.697408   61804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:19.716197   61804 ssh_runner.go:195] Run: openssl version
	I0814 01:06:19.723669   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:19.737165   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742731   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.742778   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:19.750009   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:19.761830   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:19.772601   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777222   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.777311   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:19.784554   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 01:06:19.794731   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:19.804326   61804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808528   61804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.808589   61804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:19.815518   61804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:19.828687   61804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:19.833943   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:19.839826   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:19.845576   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:19.851700   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:19.857179   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:19.862728   61804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
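	The six openssl runs above check that each control-plane certificate remains valid for at least 86400 seconds (24 hours); -checkend exits non-zero when a certificate would expire inside that window. A standalone sketch of the same check, using the apiserver certificate path from this log:
	
	# exit status 0: the certificate does NOT expire within the next 24h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "still valid for at least a day" \
	  || echo "expires within 24h (or is already expired)"
	# print the exact expiry timestamp for reference
	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt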
	I0814 01:06:19.868172   61804 kubeadm.go:392] StartCluster: {Name:old-k8s-version-179312 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-179312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:19.868280   61804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:19.868327   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.905130   61804 cri.go:89] found id: ""
	I0814 01:06:19.905208   61804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:19.915743   61804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:19.915763   61804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:19.915812   61804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:19.926673   61804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:19.928112   61804 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-179312" does not appear in /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:06:19.929057   61804 kubeconfig.go:62] /home/jenkins/minikube-integration/19429-9425/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-179312" cluster setting kubeconfig missing "old-k8s-version-179312" context setting]
	I0814 01:06:19.931588   61804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:19.938507   61804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:19.947574   61804 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I0814 01:06:19.947601   61804 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:19.947641   61804 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:19.947698   61804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:19.986219   61804 cri.go:89] found id: ""
	I0814 01:06:19.986301   61804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:20.001325   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:20.010260   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:20.010278   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:20.010320   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:20.018691   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:20.018753   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:20.027627   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:20.035892   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:20.035948   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:20.044508   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.052714   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:20.052760   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:20.062524   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:20.070978   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:20.071037   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:20.079423   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:20.088368   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:20.206955   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.197237   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.439928   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.552279   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:21.636249   61804 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:21.636337   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.136661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:22.636861   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.136511   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:23.636583   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:24.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:21.392188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:21.392717   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:21.392744   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:21.392636   63044 retry.go:31] will retry after 2.297974145s: waiting for machine to come up
	I0814 01:06:23.692024   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:23.692545   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:23.692574   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:23.692496   63044 retry.go:31] will retry after 2.273164713s: waiting for machine to come up
	I0814 01:06:22.780588   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.781349   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:23.919847   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:26.417563   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:24.636605   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.136809   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:25.636474   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.137253   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:26.636758   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.137184   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:27.637201   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:28.637409   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:29.136794   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
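	Between 01:06:21 and 01:06:29 the restart path polls roughly every 500ms for a kube-apiserver process with sudo pgrep -xnf kube-apiserver.*minikube.*. A hedged shell sketch of that style of wait loop (the 60-second cap is illustrative, not a value taken from the log):
	
	# poll for an apiserver whose full command line matches the pattern; give up after ~60s
	for i in $(seq 1 120); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process found"
	    break
	  fi
	  sleep 0.5
	done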
	I0814 01:06:25.967275   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:25.967771   61115 main.go:141] libmachine: (embed-certs-901410) DBG | unable to find current IP address of domain embed-certs-901410 in network mk-embed-certs-901410
	I0814 01:06:25.967799   61115 main.go:141] libmachine: (embed-certs-901410) DBG | I0814 01:06:25.967714   63044 retry.go:31] will retry after 3.279375715s: waiting for machine to come up
	I0814 01:06:29.249387   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.249873   61115 main.go:141] libmachine: (embed-certs-901410) Found IP for machine: 192.168.50.210
	I0814 01:06:29.249893   61115 main.go:141] libmachine: (embed-certs-901410) Reserving static IP address...
	I0814 01:06:29.249911   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has current primary IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.250345   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.250380   61115 main.go:141] libmachine: (embed-certs-901410) DBG | skip adding static IP to network mk-embed-certs-901410 - found existing host DHCP lease matching {name: "embed-certs-901410", mac: "52:54:00:fa:4e:56", ip: "192.168.50.210"}
	I0814 01:06:29.250394   61115 main.go:141] libmachine: (embed-certs-901410) Reserved static IP address: 192.168.50.210
	I0814 01:06:29.250409   61115 main.go:141] libmachine: (embed-certs-901410) Waiting for SSH to be available...
	I0814 01:06:29.250425   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Getting to WaitForSSH function...
	I0814 01:06:29.252472   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252801   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.252825   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.252933   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH client type: external
	I0814 01:06:29.252973   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Using SSH private key: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa (-rw-------)
	I0814 01:06:29.253015   61115 main.go:141] libmachine: (embed-certs-901410) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 01:06:29.253031   61115 main.go:141] libmachine: (embed-certs-901410) DBG | About to run SSH command:
	I0814 01:06:29.253044   61115 main.go:141] libmachine: (embed-certs-901410) DBG | exit 0
	I0814 01:06:29.381821   61115 main.go:141] libmachine: (embed-certs-901410) DBG | SSH cmd err, output: <nil>: 
	I0814 01:06:29.382216   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetConfigRaw
	I0814 01:06:29.382909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.385247   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385611   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.385648   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.385918   61115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/config.json ...
	I0814 01:06:29.386116   61115 machine.go:94] provisionDockerMachine start ...
	I0814 01:06:29.386151   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:29.386370   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.388690   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389026   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.389054   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.389185   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.389353   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.389658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.389812   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.390022   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.390033   61115 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 01:06:29.502650   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 01:06:29.502704   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.502923   61115 buildroot.go:166] provisioning hostname "embed-certs-901410"
	I0814 01:06:29.502947   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.503141   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.505440   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.505866   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.505903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.506078   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.506278   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506425   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.506558   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.506733   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.506942   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.506961   61115 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-901410 && echo "embed-certs-901410" | sudo tee /etc/hostname
	I0814 01:06:29.632717   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-901410
	
	I0814 01:06:29.632749   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.635919   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636318   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.636346   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.636582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.636804   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637010   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.637205   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.637413   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:29.637574   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:29.637590   61115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-901410' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-901410/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-901410' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 01:06:29.759030   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 01:06:29.759059   61115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19429-9425/.minikube CaCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19429-9425/.minikube}
	I0814 01:06:29.759100   61115 buildroot.go:174] setting up certificates
	I0814 01:06:29.759114   61115 provision.go:84] configureAuth start
	I0814 01:06:29.759126   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetMachineName
	I0814 01:06:29.759412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:29.761597   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.761918   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.761946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.762095   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.763969   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764320   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.764353   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.764497   61115 provision.go:143] copyHostCerts
	I0814 01:06:29.764568   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem, removing ...
	I0814 01:06:29.764582   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem
	I0814 01:06:29.764653   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/ca.pem (1082 bytes)
	I0814 01:06:29.764781   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem, removing ...
	I0814 01:06:29.764791   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem
	I0814 01:06:29.764814   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/cert.pem (1123 bytes)
	I0814 01:06:29.764875   61115 exec_runner.go:144] found /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem, removing ...
	I0814 01:06:29.764882   61115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem
	I0814 01:06:29.764899   61115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19429-9425/.minikube/key.pem (1675 bytes)
	I0814 01:06:29.764954   61115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem org=jenkins.embed-certs-901410 san=[127.0.0.1 192.168.50.210 embed-certs-901410 localhost minikube]
	I0814 01:06:29.870234   61115 provision.go:177] copyRemoteCerts
	I0814 01:06:29.870290   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 01:06:29.870314   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:29.872903   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873188   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:29.873220   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:29.873388   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:29.873582   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:29.873748   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:29.873849   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:29.959592   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0814 01:06:29.982484   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0814 01:06:30.005257   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 01:06:30.029571   61115 provision.go:87] duration metric: took 270.444778ms to configureAuth
	I0814 01:06:30.029595   61115 buildroot.go:189] setting minikube options for container-runtime
	I0814 01:06:30.029773   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:06:30.029836   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.032696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033078   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.033115   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.033301   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.033492   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033658   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.033798   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.033953   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.034162   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.034182   61115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 01:06:27.281267   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.284406   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.310330   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 01:06:30.310362   61115 machine.go:97] duration metric: took 924.221855ms to provisionDockerMachine
	I0814 01:06:30.310376   61115 start.go:293] postStartSetup for "embed-certs-901410" (driver="kvm2")
	I0814 01:06:30.310391   61115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 01:06:30.310412   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.310792   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 01:06:30.310829   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.313781   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314184   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.314211   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.314417   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.314605   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.314775   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.314921   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.400094   61115 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 01:06:30.403861   61115 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 01:06:30.403879   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/addons for local assets ...
	I0814 01:06:30.403936   61115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19429-9425/.minikube/files for local assets ...
	I0814 01:06:30.404014   61115 filesync.go:149] local asset: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem -> 165892.pem in /etc/ssl/certs
	I0814 01:06:30.404128   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 01:06:30.412469   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:30.434728   61115 start.go:296] duration metric: took 124.33735ms for postStartSetup
	I0814 01:06:30.434768   61115 fix.go:56] duration metric: took 18.384308902s for fixHost
	I0814 01:06:30.434792   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.437730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438155   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.438177   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.438320   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.438510   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438677   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.438818   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.439014   61115 main.go:141] libmachine: Using SSH client type: native
	I0814 01:06:30.439219   61115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.210 22 <nil> <nil>}
	I0814 01:06:30.439234   61115 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 01:06:30.550947   61115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723597590.505165718
	
	I0814 01:06:30.550974   61115 fix.go:216] guest clock: 1723597590.505165718
	I0814 01:06:30.550984   61115 fix.go:229] Guest: 2024-08-14 01:06:30.505165718 +0000 UTC Remote: 2024-08-14 01:06:30.434773276 +0000 UTC m=+355.429845421 (delta=70.392442ms)
	I0814 01:06:30.551009   61115 fix.go:200] guest clock delta is within tolerance: 70.392442ms
	I0814 01:06:30.551018   61115 start.go:83] releasing machines lock for "embed-certs-901410", held for 18.500591627s
	I0814 01:06:30.551046   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.551330   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:30.553946   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554367   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.554403   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.554586   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555088   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555280   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:06:30.555371   61115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 01:06:30.555415   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.555523   61115 ssh_runner.go:195] Run: cat /version.json
	I0814 01:06:30.555549   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:06:30.558280   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558369   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558704   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558730   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.558909   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.558922   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:30.558945   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:30.559110   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:06:30.559121   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559307   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559319   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:06:30.559477   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:06:30.559473   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:06:30.559633   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
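
The two Run lines above (the registry.k8s.io reachability curl and `cat /version.json`) each go out over their own SSH session to the guest at 192.168.50.210:22, using the machine key and `docker` user shown in the `new ssh client` lines. A minimal sketch of running one such command over SSH with golang.org/x/crypto/ssh, with the address, user and key path taken from the log; this illustrates the pattern only and is not minikube's ssh_runner implementation.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH opens one session per command, mirroring the ssh_runner.go:195 pattern above.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; not for production use
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.168.50.210:22", "docker",
            "/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa",
            "cat /version.json")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
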
	I0814 01:06:30.650942   61115 ssh_runner.go:195] Run: systemctl --version
	I0814 01:06:30.686931   61115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 01:06:30.834893   61115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 01:06:30.840573   61115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 01:06:30.840644   61115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 01:06:30.856179   61115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 01:06:30.856200   61115 start.go:495] detecting cgroup driver to use...
	I0814 01:06:30.856268   61115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 01:06:30.872056   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 01:06:30.884525   61115 docker.go:217] disabling cri-docker service (if available) ...
	I0814 01:06:30.884604   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 01:06:30.897219   61115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 01:06:30.910649   61115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 01:06:31.031843   61115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 01:06:31.170959   61115 docker.go:233] disabling docker service ...
	I0814 01:06:31.171034   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 01:06:31.185812   61115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 01:06:31.198349   61115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 01:06:31.334492   61115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 01:06:31.448638   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 01:06:31.462494   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 01:06:31.479307   61115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 01:06:31.479376   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.489135   61115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 01:06:31.489202   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.500174   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.509884   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.519412   61115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 01:06:31.529352   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.539360   61115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 01:06:31.555761   61115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
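
The sed commands above edit CRI-O's drop-in config in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is forced to "cgroupfs", conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A simplified Go sketch of the first two rewrites, with the file path and values taken from the log; this is an illustration of the line-rewrite idea, not minikube's code.

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // Simplified subset of the sed edits in the log: pin the pause image and
    // force the cgroupfs cgroup manager in CRI-O's drop-in config. The log also
    // re-creates conmon_cgroup = "pod" and injects the unprivileged-port sysctl.
    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
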
	I0814 01:06:31.566278   61115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 01:06:31.575191   61115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 01:06:31.575242   61115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 01:06:31.587429   61115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 01:06:31.596637   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:31.702555   61115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 01:06:31.836836   61115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 01:06:31.836908   61115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 01:06:31.841202   61115 start.go:563] Will wait 60s for crictl version
	I0814 01:06:31.841272   61115 ssh_runner.go:195] Run: which crictl
	I0814 01:06:31.844681   61115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 01:06:31.882260   61115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 01:06:31.882348   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.908181   61115 ssh_runner.go:195] Run: crio --version
	I0814 01:06:31.938158   61115 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 01:06:28.917018   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:30.917940   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:32.919466   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:29.636401   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.136547   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:30.636748   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.136557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:31.636752   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:32.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.136895   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:33.636703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:34.136811   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
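
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines from process 61804 are a fixed-interval poll: the same check is re-run about twice a second until a kube-apiserver process matching the pattern appears or the surrounding timeout expires. A minimal sketch of that poll shape; the 500ms interval and 4-minute timeout here are assumptions made for the example.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess re-runs pgrep until a matching kube-apiserver process exists.
    func waitForAPIServerProcess(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // Exit status 0 means at least one matching process was found.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitForAPIServerProcess(ctx); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
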
	I0814 01:06:31.939399   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetIP
	I0814 01:06:31.942325   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942622   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:06:31.942660   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:06:31.942828   61115 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 01:06:31.947071   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:31.958632   61115 kubeadm.go:883] updating cluster {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 01:06:31.958783   61115 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 01:06:31.958853   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:31.996526   61115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 01:06:31.996602   61115 ssh_runner.go:195] Run: which lz4
	I0814 01:06:32.000322   61115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 01:06:32.004629   61115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 01:06:32.004661   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 01:06:33.171433   61115 crio.go:462] duration metric: took 1.171173942s to copy over tarball
	I0814 01:06:33.171504   61115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
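
Above, the preload decision is made by parsing `sudo crictl images --output json`: because registry.k8s.io/kube-apiserver:v1.31.0 is not among the reported images, the ~389 MB preload tarball is copied to /preloaded.tar.lz4 and unpacked under /var. A sketch of that image-presence check, assuming crictl's usual `{"images":[{"repoTags":[...]}]}` output shape.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether crictl already knows the given tag, which is how
    // the preload step above decides whether to skip the tarball copy.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("preloaded:", ok)
    }
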
	I0814 01:06:31.781468   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:33.781547   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.781641   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:35.418170   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:37.920694   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:34.637429   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.137322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.636955   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.136713   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:36.636457   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.137396   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:37.637271   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.137099   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.637303   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.136673   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:35.285022   61115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11348357s)
	I0814 01:06:35.285047   61115 crio.go:469] duration metric: took 2.113589929s to extract the tarball
	I0814 01:06:35.285054   61115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 01:06:35.320814   61115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 01:06:35.362145   61115 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 01:06:35.362169   61115 cache_images.go:84] Images are preloaded, skipping loading
	I0814 01:06:35.362177   61115 kubeadm.go:934] updating node { 192.168.50.210 8443 v1.31.0 crio true true} ...
	I0814 01:06:35.362289   61115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-901410 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 01:06:35.362359   61115 ssh_runner.go:195] Run: crio config
	I0814 01:06:35.413412   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:35.413433   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:35.413442   61115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 01:06:35.413461   61115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.210 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-901410 NodeName:embed-certs-901410 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 01:06:35.413620   61115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-901410"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 01:06:35.413681   61115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 01:06:35.424217   61115 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 01:06:35.424287   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 01:06:35.433358   61115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0814 01:06:35.448828   61115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 01:06:35.463579   61115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0814 01:06:35.478423   61115 ssh_runner.go:195] Run: grep 192.168.50.210	control-plane.minikube.internal$ /etc/hosts
	I0814 01:06:35.482005   61115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.210	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 01:06:35.493411   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:06:35.625613   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:06:35.642901   61115 certs.go:68] Setting up /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410 for IP: 192.168.50.210
	I0814 01:06:35.642927   61115 certs.go:194] generating shared ca certs ...
	I0814 01:06:35.642955   61115 certs.go:226] acquiring lock for ca certs: {Name:mke321171338291cf6d66f3acbfe43d3fabcafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:06:35.643119   61115 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key
	I0814 01:06:35.643172   61115 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key
	I0814 01:06:35.643184   61115 certs.go:256] generating profile certs ...
	I0814 01:06:35.643301   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/client.key
	I0814 01:06:35.643390   61115 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key.0b2ea541
	I0814 01:06:35.643439   61115 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key
	I0814 01:06:35.643591   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem (1338 bytes)
	W0814 01:06:35.643630   61115 certs.go:480] ignoring /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589_empty.pem, impossibly tiny 0 bytes
	I0814 01:06:35.643648   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 01:06:35.643682   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/ca.pem (1082 bytes)
	I0814 01:06:35.643727   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/cert.pem (1123 bytes)
	I0814 01:06:35.643768   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/certs/key.pem (1675 bytes)
	I0814 01:06:35.643825   61115 certs.go:484] found cert: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem (1708 bytes)
	I0814 01:06:35.644478   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 01:06:35.681297   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 01:06:35.730067   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 01:06:35.763133   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 01:06:35.790593   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 01:06:35.815663   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 01:06:35.840763   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 01:06:35.863820   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/embed-certs-901410/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 01:06:35.887018   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/certs/16589.pem --> /usr/share/ca-certificates/16589.pem (1338 bytes)
	I0814 01:06:35.909408   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/ssl/certs/165892.pem --> /usr/share/ca-certificates/165892.pem (1708 bytes)
	I0814 01:06:35.934175   61115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 01:06:35.957179   61115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 01:06:35.972922   61115 ssh_runner.go:195] Run: openssl version
	I0814 01:06:35.978523   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16589.pem && ln -fs /usr/share/ca-certificates/16589.pem /etc/ssl/certs/16589.pem"
	I0814 01:06:35.987896   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991861   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 13 23:59 /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.991922   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16589.pem
	I0814 01:06:35.997354   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16589.pem /etc/ssl/certs/51391683.0"
	I0814 01:06:36.007366   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165892.pem && ln -fs /usr/share/ca-certificates/165892.pem /etc/ssl/certs/165892.pem"
	I0814 01:06:36.017502   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021456   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 13 23:59 /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.021506   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165892.pem
	I0814 01:06:36.026605   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165892.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 01:06:36.035758   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 01:06:36.044976   61115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048866   61115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 13 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.048905   61115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 01:06:36.053841   61115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
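
Each of the three certificate blocks above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout -in <pem>`, and symlink /etc/ssl/certs/<hash>.0 to the PEM so hash-based CA lookup finds it (51391683.0, 3ec20f2e.0 and b5213941.0 here). A sketch of that hash-and-link step, shelling out to openssl as the log does; illustration only.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkCACert mirrors the openssl / ln -fs pair in the log: hash the cert,
    // then point /etc/ssl/certs/<hash>.0 at it.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
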
	I0814 01:06:36.062901   61115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 01:06:36.066905   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 01:06:36.072359   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 01:06:36.077384   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 01:06:36.082634   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 01:06:36.087734   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 01:06:36.093076   61115 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
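
Each `openssl x509 -noout -in <crt> -checkend 86400` above asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? The equivalent check in Go with crypto/x509, using one of the certificate paths from the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid d from now -- the equivalent of `openssl x509 -checkend`.
    func validFor(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("valid for the next 24h:", ok)
    }
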
	I0814 01:06:36.098239   61115 kubeadm.go:392] StartCluster: {Name:embed-certs-901410 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-901410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 01:06:36.098366   61115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 01:06:36.098414   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.137745   61115 cri.go:89] found id: ""
	I0814 01:06:36.137812   61115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 01:06:36.151288   61115 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 01:06:36.151304   61115 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 01:06:36.151346   61115 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 01:06:36.160854   61115 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 01:06:36.162454   61115 kubeconfig.go:125] found "embed-certs-901410" server: "https://192.168.50.210:8443"
	I0814 01:06:36.165608   61115 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 01:06:36.174251   61115 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.210
	I0814 01:06:36.174272   61115 kubeadm.go:1160] stopping kube-system containers ...
	I0814 01:06:36.174307   61115 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 01:06:36.174355   61115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 01:06:36.208617   61115 cri.go:89] found id: ""
	I0814 01:06:36.208689   61115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 01:06:36.223217   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:06:36.231791   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:06:36.231807   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:06:36.231846   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:06:36.239738   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:06:36.239779   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:06:36.248183   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:06:36.256052   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:06:36.256099   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:06:36.264174   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.271909   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:06:36.271951   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:06:36.280467   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:06:36.288795   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:06:36.288841   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:06:36.297142   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:06:36.305326   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:36.419654   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.266994   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.469417   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.544102   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:37.616596   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:06:37.616684   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.117278   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:38.616805   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.117789   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.616986   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:39.684640   61115 api_server.go:72] duration metric: took 2.068036759s to wait for apiserver process to appear ...
	I0814 01:06:39.684668   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:06:39.684690   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:39.685138   61115 api_server.go:269] stopped: https://192.168.50.210:8443/healthz: Get "https://192.168.50.210:8443/healthz": dial tcp 192.168.50.210:8443: connect: connection refused
	I0814 01:06:37.782873   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.281438   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.418079   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:42.418440   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:40.184807   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.435930   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.435960   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.435997   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.464919   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 01:06:42.464949   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 01:06:42.685218   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:42.691065   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:42.691089   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.185274   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.191160   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 01:06:43.191189   61115 api_server.go:103] status: https://192.168.50.210:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 01:06:43.685407   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:06:43.689515   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:06:43.695408   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:06:43.695435   61115 api_server.go:131] duration metric: took 4.010759094s to wait for apiserver health ...
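
The health wait above runs through the usual phases: connection refused while the apiserver is still binding, 403 for the anonymous probe, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, and finally a bare 200 "ok". A sketch of polling /healthz in the same spirit; skipping TLS verification is an assumption made for the example (the probe runs before a trusted client config exists), and the URL is the one from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Anonymous probe before kubeconfig/CA are in place, as in the log.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200 "ok"
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, firstLine(string(body)))
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func firstLine(s string) string {
        for i, r := range s {
            if r == '\n' {
                return s[:i]
            }
        }
        return s
    }

    func main() {
        if err := waitHealthz("https://192.168.50.210:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
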
	I0814 01:06:43.695445   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:06:43.695454   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:06:43.696966   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:06:39.637384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.136562   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:40.637447   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.137212   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:41.636824   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.136790   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:42.637352   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.137237   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.637327   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:44.136777   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:43.698444   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:06:43.713840   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:06:43.754611   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:06:43.765369   61115 system_pods.go:59] 8 kube-system pods found
	I0814 01:06:43.765402   61115 system_pods.go:61] "coredns-6f6b679f8f-fpz8f" [0fae381f-1394-4a55-9735-61197051e0da] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:06:43.765410   61115 system_pods.go:61] "etcd-embed-certs-901410" [238a87a0-88ab-4663-bc2f-6bf2cb641902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 01:06:43.765421   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [0847b62e-42c4-4616-9412-a1547f991ea5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 01:06:43.765427   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [868c288a-504f-4bc6-9af3-8d3eff0a4e66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 01:06:43.765431   61115 system_pods.go:61] "kube-proxy-gtr77" [f7b7a6b1-e47f-4982-8247-2adf9ce6690b] Running
	I0814 01:06:43.765436   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [803a8501-9a24-436d-8439-2e05ed2b6e2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 01:06:43.765443   61115 system_pods.go:61] "metrics-server-6867b74b74-82tmq" [4683e8c4-92a5-4b81-86c8-55da6044e780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:06:43.765447   61115 system_pods.go:61] "storage-provisioner" [796497c7-c7b4-4207-9dbb-970702bab314] Running
	I0814 01:06:43.765453   61115 system_pods.go:74] duration metric: took 10.823914ms to wait for pod list to return data ...
	I0814 01:06:43.765468   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:06:43.769292   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:06:43.769319   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:06:43.769334   61115 node_conditions.go:105] duration metric: took 3.855137ms to run NodePressure ...
	I0814 01:06:43.769355   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 01:06:44.041384   61115 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045549   61115 kubeadm.go:739] kubelet initialised
	I0814 01:06:44.045569   61115 kubeadm.go:740] duration metric: took 4.15887ms waiting for restarted kubelet to initialise ...
	I0814 01:06:44.045576   61115 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:06:44.050480   61115 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
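
From this point the run waits up to 4m0s per system-critical pod for the Ready condition, which is what the later pod_ready.go:92 / pod_ready.go:102 lines report. The same wait can be expressed as a `kubectl wait` call; the sketch below wraps it in Go and assumes kubectl and the embed-certs-901410 context are available on the machine running it (illustration only, not how pod_ready.go polls).

    package main

    import (
        "fmt"
        "os/exec"
    )

    // waitPodReady shells out to kubectl wait, the CLI equivalent of the
    // pod_ready.go polling loop in the log above.
    func waitPodReady(kubeContext, namespace, pod string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "-n", namespace, "wait", "--for=condition=Ready",
            "pod/"+pod, "--timeout=4m")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := waitPodReady("embed-certs-901410", "kube-system", "coredns-6f6b679f8f-fpz8f"); err != nil {
            fmt.Println("pod did not become Ready:", err)
        }
    }
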
	I0814 01:06:42.281812   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.795089   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.917037   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:46.918399   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:44.636971   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.137082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:45.636661   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.136690   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.636597   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.136601   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:47.636799   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.136486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:48.637415   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:49.136703   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:46.057380   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:48.556914   61115 pod_ready.go:102] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.561672   61115 pod_ready.go:92] pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:49.561693   61115 pod_ready.go:81] duration metric: took 5.511190087s for pod "coredns-6f6b679f8f-fpz8f" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:49.561705   61115 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:47.281700   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.780884   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.418739   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:51.918181   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:49.636646   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.137134   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:50.637310   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.136913   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.636930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:52.636489   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.137140   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:53.637032   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:54.137345   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:51.567510   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:52.567550   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.567575   61115 pod_ready.go:81] duration metric: took 3.005862861s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.567584   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572128   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.572150   61115 pod_ready.go:81] duration metric: took 4.558756ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.572160   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575875   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.575894   61115 pod_ready.go:81] duration metric: took 3.728258ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.575903   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579889   61115 pod_ready.go:92] pod "kube-proxy-gtr77" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.579908   61115 pod_ready.go:81] duration metric: took 3.999715ms for pod "kube-proxy-gtr77" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.579916   61115 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583481   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:06:52.583499   61115 pod_ready.go:81] duration metric: took 3.577393ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:52.583508   61115 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	I0814 01:06:54.590479   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
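(Aside, for anyone reproducing this check by hand: the pod_ready.go polling above is asking the cluster for the pod's Ready condition. A minimal sketch of the equivalent manual query, assuming the kubectl context is named after the minikube profile embed-certs-901410, which is how minikube normally names contexts:

  # prints "True" once the pod is Ready; the log above shows it still "False" at this point
  kubectl --context embed-certs-901410 -n kube-system get pod metrics-server-6867b74b74-82tmq \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
)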
	I0814 01:06:51.781057   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.280478   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.418737   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.917785   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:54.636613   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.137191   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:55.637149   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.137437   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:56.637155   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.136629   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.636616   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.136691   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:58.637180   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:59.137246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:06:57.091108   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.590751   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:56.781427   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.280620   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.281835   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:58.918424   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:01.418091   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:06:59.636603   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.137399   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:00.636477   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.136689   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:01.636867   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.136874   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.636850   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:03.636915   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:04.137185   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:02.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.589929   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.780774   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:05.781084   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:03.918432   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:06.417245   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:04.636433   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.136514   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:05.637177   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.136522   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:06.636384   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.136753   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.636417   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.137158   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:08.636665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:09.137281   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:07.089678   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.590309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:07.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.281385   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:08.917707   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:10.917814   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:09.637102   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.136575   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:10.637290   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.136999   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:11.636523   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.136756   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.637369   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.136763   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:13.637275   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:14.137363   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:12.090323   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.092742   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:12.780837   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.781484   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:13.424099   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:15.917599   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.918631   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:14.636871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.136819   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:15.636660   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.136568   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.637322   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.137088   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:17.637082   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.136469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:18.637351   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:19.136899   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:16.589319   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:18.590539   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:17.279827   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.280727   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:20.418308   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:22.418709   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:19.636984   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.137256   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:20.636678   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.136871   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:21.637264   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:21.637336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:21.674035   61804 cri.go:89] found id: ""
	I0814 01:07:21.674081   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.674091   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:21.674100   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:21.674150   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:21.706567   61804 cri.go:89] found id: ""
	I0814 01:07:21.706594   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.706602   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:21.706608   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:21.706670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:21.744892   61804 cri.go:89] found id: ""
	I0814 01:07:21.744917   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.744927   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:21.744933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:21.744987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:21.780766   61804 cri.go:89] found id: ""
	I0814 01:07:21.780791   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.780799   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:21.780805   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:21.780861   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:21.813710   61804 cri.go:89] found id: ""
	I0814 01:07:21.813737   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.813744   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:21.813750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:21.813800   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:21.851621   61804 cri.go:89] found id: ""
	I0814 01:07:21.851649   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.851657   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:21.851663   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:21.851713   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:21.885176   61804 cri.go:89] found id: ""
	I0814 01:07:21.885207   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.885218   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:21.885226   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:21.885293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:21.922273   61804 cri.go:89] found id: ""
	I0814 01:07:21.922303   61804 logs.go:276] 0 containers: []
	W0814 01:07:21.922319   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:21.922330   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:21.922344   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:21.975619   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:21.975657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:21.989295   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:21.989330   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:22.117376   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:22.117406   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:22.117421   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:22.190366   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:22.190407   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
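(Aside, for anyone reproducing this diagnostic pass by hand: the cycle above boils down to the commands below, assembled from what the log quotes verbatim. Running them over `minikube ssh -p <profile>` is an assumption about your setup; the kubectl path is specific to the v1.20.0 binaries minikube staged on this node.

  # is a kube-apiserver process running at all?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # does CRI-O see any control-plane containers? repeat for etcd, coredns, kube-scheduler,
  # kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard
  sudo crictl ps -a --quiet --name=kube-apiserver
  # kubelet and CRI-O journals, plus kernel warnings
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  # node view via the staged kubectl; fails with "connection refused" while the apiserver is down
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
)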
	I0814 01:07:21.094685   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:23.592014   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:21.781584   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.281405   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.919338   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:27.417053   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:24.727910   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:24.741649   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:24.741722   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:24.778658   61804 cri.go:89] found id: ""
	I0814 01:07:24.778684   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.778693   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:24.778699   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:24.778761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:24.811263   61804 cri.go:89] found id: ""
	I0814 01:07:24.811290   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.811314   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:24.811321   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:24.811385   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:24.847414   61804 cri.go:89] found id: ""
	I0814 01:07:24.847442   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.847450   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:24.847456   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:24.847512   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:24.888714   61804 cri.go:89] found id: ""
	I0814 01:07:24.888737   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.888745   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:24.888750   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:24.888828   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:24.937957   61804 cri.go:89] found id: ""
	I0814 01:07:24.937983   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.937994   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:24.938002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:24.938086   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:24.990489   61804 cri.go:89] found id: ""
	I0814 01:07:24.990514   61804 logs.go:276] 0 containers: []
	W0814 01:07:24.990522   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:24.990530   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:24.990592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:25.033458   61804 cri.go:89] found id: ""
	I0814 01:07:25.033489   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.033500   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:25.033508   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:25.033594   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:25.065009   61804 cri.go:89] found id: ""
	I0814 01:07:25.065039   61804 logs.go:276] 0 containers: []
	W0814 01:07:25.065049   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:25.065062   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:25.065074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:25.116806   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:25.116841   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:25.131759   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:25.131790   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:25.206389   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:25.206415   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:25.206435   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:25.284603   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:25.284632   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:27.823371   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:27.836369   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:27.836452   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:27.876906   61804 cri.go:89] found id: ""
	I0814 01:07:27.876937   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.876950   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:27.876960   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:27.877039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:27.912449   61804 cri.go:89] found id: ""
	I0814 01:07:27.912481   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.912494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:27.912501   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:27.912568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:27.945584   61804 cri.go:89] found id: ""
	I0814 01:07:27.945611   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.945620   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:27.945628   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:27.945693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:27.982470   61804 cri.go:89] found id: ""
	I0814 01:07:27.982498   61804 logs.go:276] 0 containers: []
	W0814 01:07:27.982508   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:27.982517   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:27.982592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:28.020494   61804 cri.go:89] found id: ""
	I0814 01:07:28.020521   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.020529   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:28.020535   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:28.020604   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:28.055810   61804 cri.go:89] found id: ""
	I0814 01:07:28.055835   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.055846   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:28.055854   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:28.055917   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:28.092241   61804 cri.go:89] found id: ""
	I0814 01:07:28.092266   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.092273   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:28.092279   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:28.092336   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:28.128234   61804 cri.go:89] found id: ""
	I0814 01:07:28.128259   61804 logs.go:276] 0 containers: []
	W0814 01:07:28.128266   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:28.128275   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:28.128292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:28.169651   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:28.169682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:28.223578   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:28.223614   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:28.237283   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:28.237317   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:28.310610   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:28.310633   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:28.310657   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:26.090425   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:28.090637   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:26.781404   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.280644   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.281808   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:29.917201   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:31.918087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:30.892125   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:30.904416   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:30.904487   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:30.938158   61804 cri.go:89] found id: ""
	I0814 01:07:30.938186   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.938197   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:30.938204   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:30.938273   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:30.969960   61804 cri.go:89] found id: ""
	I0814 01:07:30.969990   61804 logs.go:276] 0 containers: []
	W0814 01:07:30.970000   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:30.970006   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:30.970094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:31.003442   61804 cri.go:89] found id: ""
	I0814 01:07:31.003472   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.003484   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:31.003492   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:31.003547   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:31.036819   61804 cri.go:89] found id: ""
	I0814 01:07:31.036852   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.036866   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:31.036874   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:31.036943   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:31.070521   61804 cri.go:89] found id: ""
	I0814 01:07:31.070546   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.070556   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:31.070570   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:31.070627   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:31.111200   61804 cri.go:89] found id: ""
	I0814 01:07:31.111223   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.111230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:31.111236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:31.111299   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:31.143931   61804 cri.go:89] found id: ""
	I0814 01:07:31.143965   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.143973   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:31.143978   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:31.144027   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:31.176742   61804 cri.go:89] found id: ""
	I0814 01:07:31.176765   61804 logs.go:276] 0 containers: []
	W0814 01:07:31.176773   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:31.176782   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:31.176800   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:31.247117   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:31.247145   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:31.247159   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:31.327763   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:31.327797   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:31.368715   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:31.368753   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:31.421802   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:31.421833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:33.936162   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:33.949580   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:33.949647   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:33.982423   61804 cri.go:89] found id: ""
	I0814 01:07:33.982452   61804 logs.go:276] 0 containers: []
	W0814 01:07:33.982464   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:33.982472   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:33.982532   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:34.015547   61804 cri.go:89] found id: ""
	I0814 01:07:34.015580   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.015591   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:34.015598   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:34.015660   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:34.047814   61804 cri.go:89] found id: ""
	I0814 01:07:34.047837   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.047845   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:34.047851   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:34.047914   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:34.080509   61804 cri.go:89] found id: ""
	I0814 01:07:34.080539   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.080552   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:34.080561   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:34.080629   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:34.114693   61804 cri.go:89] found id: ""
	I0814 01:07:34.114723   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.114735   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:34.114742   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:34.114812   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:34.148294   61804 cri.go:89] found id: ""
	I0814 01:07:34.148321   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.148334   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:34.148344   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:34.148410   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:34.182913   61804 cri.go:89] found id: ""
	I0814 01:07:34.182938   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.182947   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:34.182953   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:34.183002   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:34.215609   61804 cri.go:89] found id: ""
	I0814 01:07:34.215639   61804 logs.go:276] 0 containers: []
	W0814 01:07:34.215649   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:34.215662   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:34.215688   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:34.278627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:34.278657   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:34.278674   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:34.353824   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:34.353863   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:34.390511   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:34.390551   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:34.440170   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:34.440205   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:30.589452   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.089231   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:33.780724   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:35.781648   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:34.417300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.418300   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:36.955228   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:36.968676   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:36.968752   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:37.005738   61804 cri.go:89] found id: ""
	I0814 01:07:37.005770   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.005781   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:37.005800   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:37.005876   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:37.038556   61804 cri.go:89] found id: ""
	I0814 01:07:37.038586   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.038594   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:37.038599   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:37.038659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:37.073835   61804 cri.go:89] found id: ""
	I0814 01:07:37.073870   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.073881   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:37.073890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:37.073952   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:37.109720   61804 cri.go:89] found id: ""
	I0814 01:07:37.109754   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.109766   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:37.109774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:37.109837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:37.141361   61804 cri.go:89] found id: ""
	I0814 01:07:37.141391   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.141401   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:37.141409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:37.141460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:37.172803   61804 cri.go:89] found id: ""
	I0814 01:07:37.172833   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.172841   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:37.172847   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:37.172898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:37.205074   61804 cri.go:89] found id: ""
	I0814 01:07:37.205101   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.205110   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:37.205116   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:37.205172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:37.237440   61804 cri.go:89] found id: ""
	I0814 01:07:37.237462   61804 logs.go:276] 0 containers: []
	W0814 01:07:37.237472   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:37.237484   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:37.237499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:37.286411   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:37.286442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:37.299649   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:37.299673   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:37.363165   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:37.363188   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:37.363209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:37.440551   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:37.440589   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:35.090686   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:37.091438   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.590158   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.281686   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:40.780496   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:38.919024   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:41.417327   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:39.980740   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:39.992656   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:39.992724   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:40.026980   61804 cri.go:89] found id: ""
	I0814 01:07:40.027009   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.027020   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:40.027027   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:40.027093   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:40.059474   61804 cri.go:89] found id: ""
	I0814 01:07:40.059509   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.059521   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:40.059528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:40.059602   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:40.092222   61804 cri.go:89] found id: ""
	I0814 01:07:40.092251   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.092260   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:40.092265   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:40.092314   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:40.123458   61804 cri.go:89] found id: ""
	I0814 01:07:40.123487   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.123495   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:40.123501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:40.123557   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:40.155410   61804 cri.go:89] found id: ""
	I0814 01:07:40.155433   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.155461   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:40.155467   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:40.155517   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:40.186726   61804 cri.go:89] found id: ""
	I0814 01:07:40.186750   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.186774   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:40.186782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:40.186842   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:40.223940   61804 cri.go:89] found id: ""
	I0814 01:07:40.223964   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.223974   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:40.223981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:40.224039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:40.255483   61804 cri.go:89] found id: ""
	I0814 01:07:40.255511   61804 logs.go:276] 0 containers: []
	W0814 01:07:40.255520   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:40.255532   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:40.255547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:40.307368   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:40.307400   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:40.320297   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:40.320323   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:40.382358   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:40.382390   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:40.382406   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:40.464226   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:40.464312   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.001144   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:43.015011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:43.015090   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:43.047581   61804 cri.go:89] found id: ""
	I0814 01:07:43.047617   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.047629   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:43.047636   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:43.047709   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:43.081737   61804 cri.go:89] found id: ""
	I0814 01:07:43.081769   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.081780   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:43.081788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:43.081858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:43.116828   61804 cri.go:89] found id: ""
	I0814 01:07:43.116851   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.116860   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:43.116865   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:43.116918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:43.149154   61804 cri.go:89] found id: ""
	I0814 01:07:43.149183   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.149195   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:43.149203   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:43.149270   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:43.183298   61804 cri.go:89] found id: ""
	I0814 01:07:43.183327   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.183335   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:43.183341   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:43.183402   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:43.217844   61804 cri.go:89] found id: ""
	I0814 01:07:43.217875   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.217885   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:43.217894   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:43.217957   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:43.254501   61804 cri.go:89] found id: ""
	I0814 01:07:43.254529   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.254540   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:43.254549   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:43.254621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:43.288499   61804 cri.go:89] found id: ""
	I0814 01:07:43.288520   61804 logs.go:276] 0 containers: []
	W0814 01:07:43.288528   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:43.288538   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:43.288553   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:43.364920   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:43.364957   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:43.402536   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:43.402563   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:43.454370   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:43.454403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:43.467972   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:43.468000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:43.541823   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:42.089879   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:44.090254   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:42.781141   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.280856   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:43.418435   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:45.918224   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.918468   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:46.042614   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:46.055014   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:46.055074   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:46.088632   61804 cri.go:89] found id: ""
	I0814 01:07:46.088664   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.088676   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:46.088684   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:46.088755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:46.121747   61804 cri.go:89] found id: ""
	I0814 01:07:46.121774   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.121782   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:46.121788   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:46.121837   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:46.157301   61804 cri.go:89] found id: ""
	I0814 01:07:46.157329   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.157340   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:46.157348   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:46.157412   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:46.188543   61804 cri.go:89] found id: ""
	I0814 01:07:46.188575   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.188586   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:46.188594   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:46.188657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:46.219762   61804 cri.go:89] found id: ""
	I0814 01:07:46.219787   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.219795   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:46.219801   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:46.219849   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:46.253187   61804 cri.go:89] found id: ""
	I0814 01:07:46.253223   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.253234   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:46.253242   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:46.253326   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:46.287614   61804 cri.go:89] found id: ""
	I0814 01:07:46.287647   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.287656   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:46.287662   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:46.287716   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:46.323558   61804 cri.go:89] found id: ""
	I0814 01:07:46.323588   61804 logs.go:276] 0 containers: []
	W0814 01:07:46.323599   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:46.323611   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:46.323628   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:46.336110   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:46.336139   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:46.398541   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:46.398568   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:46.398584   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.476132   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:46.476166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:46.521433   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:46.521470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.071324   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:49.083741   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:49.083816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:49.117788   61804 cri.go:89] found id: ""
	I0814 01:07:49.117816   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.117828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:49.117836   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:49.117903   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:49.153363   61804 cri.go:89] found id: ""
	I0814 01:07:49.153398   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.153409   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:49.153417   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:49.153488   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:49.186229   61804 cri.go:89] found id: ""
	I0814 01:07:49.186253   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.186261   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:49.186267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:49.186327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:49.218463   61804 cri.go:89] found id: ""
	I0814 01:07:49.218485   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.218492   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:49.218498   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:49.218559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:49.250172   61804 cri.go:89] found id: ""
	I0814 01:07:49.250204   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.250214   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:49.250222   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:49.250287   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:49.285574   61804 cri.go:89] found id: ""
	I0814 01:07:49.285602   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.285612   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:49.285620   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:49.285679   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:49.317583   61804 cri.go:89] found id: ""
	I0814 01:07:49.317614   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.317625   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:49.317632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:49.317690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:49.350486   61804 cri.go:89] found id: ""
	I0814 01:07:49.350513   61804 logs.go:276] 0 containers: []
	W0814 01:07:49.350524   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:49.350535   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:49.350550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:49.401242   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:49.401278   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:49.415776   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:49.415805   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:49.487135   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:49.487207   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:49.487229   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:46.092233   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:48.589232   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:47.780910   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.781008   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:50.418178   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.917953   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:49.569068   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:49.569103   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.108074   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:52.120495   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:52.120568   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:52.155022   61804 cri.go:89] found id: ""
	I0814 01:07:52.155047   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.155055   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:52.155063   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:52.155131   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:52.186783   61804 cri.go:89] found id: ""
	I0814 01:07:52.186813   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.186837   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:52.186854   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:52.186908   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:52.219089   61804 cri.go:89] found id: ""
	I0814 01:07:52.219118   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.219129   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:52.219136   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:52.219200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:52.252343   61804 cri.go:89] found id: ""
	I0814 01:07:52.252378   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.252391   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:52.252399   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:52.252460   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:52.288827   61804 cri.go:89] found id: ""
	I0814 01:07:52.288848   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.288855   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:52.288861   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:52.288913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:52.322201   61804 cri.go:89] found id: ""
	I0814 01:07:52.322228   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.322240   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:52.322247   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:52.322327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:52.357482   61804 cri.go:89] found id: ""
	I0814 01:07:52.357508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.357519   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:52.357527   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:52.357599   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:52.390481   61804 cri.go:89] found id: ""
	I0814 01:07:52.390508   61804 logs.go:276] 0 containers: []
	W0814 01:07:52.390515   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:52.390523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:52.390536   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:52.403144   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:52.403171   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:52.474148   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:52.474170   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:52.474182   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:52.555353   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:52.555396   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:52.592151   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:52.592180   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:50.589355   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.590468   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:52.282598   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:54.780753   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.418165   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.418294   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:55.143835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:55.156285   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:55.156360   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:55.195624   61804 cri.go:89] found id: ""
	I0814 01:07:55.195655   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.195666   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:55.195673   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:55.195735   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:55.230384   61804 cri.go:89] found id: ""
	I0814 01:07:55.230409   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.230419   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:55.230426   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:55.230491   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:55.264774   61804 cri.go:89] found id: ""
	I0814 01:07:55.264802   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.264812   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:55.264819   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:55.264905   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:55.297679   61804 cri.go:89] found id: ""
	I0814 01:07:55.297706   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.297715   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:55.297721   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:55.297780   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:55.331555   61804 cri.go:89] found id: ""
	I0814 01:07:55.331591   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.331602   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:55.331609   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:55.331685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:55.362351   61804 cri.go:89] found id: ""
	I0814 01:07:55.362374   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.362381   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:55.362388   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:55.362434   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:55.397261   61804 cri.go:89] found id: ""
	I0814 01:07:55.397292   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.397301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:55.397308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:55.397355   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:55.431333   61804 cri.go:89] found id: ""
	I0814 01:07:55.431363   61804 logs.go:276] 0 containers: []
	W0814 01:07:55.431376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:55.431388   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:55.431403   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:55.445865   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:55.445901   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:55.511474   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:55.511494   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:55.511505   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:55.596934   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:55.596966   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.632440   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:55.632477   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.183656   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:07:58.196717   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:07:58.196776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:07:58.231854   61804 cri.go:89] found id: ""
	I0814 01:07:58.231890   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.231902   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:07:58.231910   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:07:58.231972   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:07:58.267169   61804 cri.go:89] found id: ""
	I0814 01:07:58.267201   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.267211   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:07:58.267218   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:07:58.267277   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:07:58.301552   61804 cri.go:89] found id: ""
	I0814 01:07:58.301581   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.301589   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:07:58.301596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:07:58.301652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:07:58.334399   61804 cri.go:89] found id: ""
	I0814 01:07:58.334427   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.334434   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:07:58.334440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:07:58.334490   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:07:58.366748   61804 cri.go:89] found id: ""
	I0814 01:07:58.366777   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.366787   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:07:58.366794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:07:58.366860   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:07:58.401078   61804 cri.go:89] found id: ""
	I0814 01:07:58.401108   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.401117   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:07:58.401123   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:07:58.401179   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:07:58.433766   61804 cri.go:89] found id: ""
	I0814 01:07:58.433795   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.433807   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:07:58.433813   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:07:58.433863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:07:58.467187   61804 cri.go:89] found id: ""
	I0814 01:07:58.467211   61804 logs.go:276] 0 containers: []
	W0814 01:07:58.467219   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:07:58.467227   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:07:58.467241   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:07:58.520695   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:07:58.520733   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:07:58.535262   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:07:58.535288   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:07:58.601335   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:07:58.601354   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:07:58.601367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:07:58.683365   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:07:58.683411   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:07:55.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:57.089754   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.590432   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:56.783376   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.282603   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:07:59.917309   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.917515   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.221305   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:01.233782   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:01.233863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:01.265991   61804 cri.go:89] found id: ""
	I0814 01:08:01.266019   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.266030   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:01.266048   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:01.266116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:01.300802   61804 cri.go:89] found id: ""
	I0814 01:08:01.300825   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.300840   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:01.300851   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:01.300918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:01.334762   61804 cri.go:89] found id: ""
	I0814 01:08:01.334788   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.334796   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:01.334803   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:01.334858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:01.367051   61804 cri.go:89] found id: ""
	I0814 01:08:01.367075   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.367083   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:01.367089   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:01.367147   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:01.401875   61804 cri.go:89] found id: ""
	I0814 01:08:01.401904   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.401915   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:01.401922   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:01.401982   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:01.435930   61804 cri.go:89] found id: ""
	I0814 01:08:01.435958   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.435975   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:01.435994   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:01.436056   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.470913   61804 cri.go:89] found id: ""
	I0814 01:08:01.470943   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.470958   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:01.470966   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:01.471030   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:01.506552   61804 cri.go:89] found id: ""
	I0814 01:08:01.506584   61804 logs.go:276] 0 containers: []
	W0814 01:08:01.506595   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:01.506607   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:01.506621   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:01.557203   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:01.557245   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:01.570729   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:01.570754   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:01.636244   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:01.636268   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:01.636282   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:01.720905   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:01.720937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:04.261326   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:04.274952   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:04.275020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:04.309640   61804 cri.go:89] found id: ""
	I0814 01:08:04.309695   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.309708   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:04.309717   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:04.309784   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:04.343333   61804 cri.go:89] found id: ""
	I0814 01:08:04.343368   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.343380   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:04.343388   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:04.343446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:04.377058   61804 cri.go:89] found id: ""
	I0814 01:08:04.377090   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.377101   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:04.377109   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:04.377170   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:04.411932   61804 cri.go:89] found id: ""
	I0814 01:08:04.411961   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.411973   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:04.411980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:04.412039   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:04.449523   61804 cri.go:89] found id: ""
	I0814 01:08:04.449557   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.449569   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:04.449577   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:04.449639   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:04.505818   61804 cri.go:89] found id: ""
	I0814 01:08:04.505844   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.505852   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:04.505858   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:04.505911   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:01.594524   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.089421   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:01.780659   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.780893   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.281784   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:03.917861   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:06.417117   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:04.540720   61804 cri.go:89] found id: ""
	I0814 01:08:04.540747   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.540754   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:04.540759   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:04.540822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:04.575188   61804 cri.go:89] found id: ""
	I0814 01:08:04.575218   61804 logs.go:276] 0 containers: []
	W0814 01:08:04.575230   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:04.575241   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:04.575254   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:04.624557   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:04.624593   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:04.637679   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:04.637707   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:04.707655   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:04.707676   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:04.707690   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:04.792530   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:04.792564   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.333726   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:07.346667   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:07.346762   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:07.379773   61804 cri.go:89] found id: ""
	I0814 01:08:07.379809   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.379821   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:07.379832   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:07.379898   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:07.413473   61804 cri.go:89] found id: ""
	I0814 01:08:07.413508   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.413519   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:07.413528   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:07.413592   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:07.448033   61804 cri.go:89] found id: ""
	I0814 01:08:07.448065   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.448076   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:07.448084   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:07.448149   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:07.483015   61804 cri.go:89] found id: ""
	I0814 01:08:07.483043   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.483051   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:07.483057   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:07.483116   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:07.516222   61804 cri.go:89] found id: ""
	I0814 01:08:07.516245   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.516253   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:07.516259   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:07.516309   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:07.552179   61804 cri.go:89] found id: ""
	I0814 01:08:07.552203   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.552211   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:07.552217   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:07.552269   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:07.585804   61804 cri.go:89] found id: ""
	I0814 01:08:07.585832   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.585842   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:07.585850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:07.585913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:07.620731   61804 cri.go:89] found id: ""
	I0814 01:08:07.620757   61804 logs.go:276] 0 containers: []
	W0814 01:08:07.620766   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:07.620774   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:07.620786   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:07.662648   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:07.662686   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:07.713380   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:07.713418   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:07.726770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:07.726801   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:07.794679   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:07.794705   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:07.794720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:06.090545   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.093404   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.780821   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:11.281395   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:08.417151   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.418613   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:12.916869   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:10.370665   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:10.383986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:10.384046   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:10.417596   61804 cri.go:89] found id: ""
	I0814 01:08:10.417622   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.417634   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:10.417642   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:10.417703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:10.453782   61804 cri.go:89] found id: ""
	I0814 01:08:10.453813   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.453824   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:10.453832   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:10.453895   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:10.486795   61804 cri.go:89] found id: ""
	I0814 01:08:10.486821   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.486831   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:10.486839   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:10.486930   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:10.519249   61804 cri.go:89] found id: ""
	I0814 01:08:10.519285   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.519296   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:10.519304   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:10.519369   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:10.551791   61804 cri.go:89] found id: ""
	I0814 01:08:10.551818   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.551825   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:10.551834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:10.551892   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:10.584630   61804 cri.go:89] found id: ""
	I0814 01:08:10.584658   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.584669   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:10.584679   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:10.584742   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:10.616870   61804 cri.go:89] found id: ""
	I0814 01:08:10.616898   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.616911   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:10.616918   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:10.616984   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:10.650681   61804 cri.go:89] found id: ""
	I0814 01:08:10.650709   61804 logs.go:276] 0 containers: []
	W0814 01:08:10.650721   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:10.650731   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:10.650748   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.663021   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:10.663047   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:10.731788   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:10.731813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:10.731829   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:10.812174   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:10.812213   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:10.854260   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:10.854287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.414862   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:13.428537   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:13.428595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:13.460800   61804 cri.go:89] found id: ""
	I0814 01:08:13.460836   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.460850   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:13.460859   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:13.460933   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:13.494240   61804 cri.go:89] found id: ""
	I0814 01:08:13.494264   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.494274   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:13.494282   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:13.494370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:13.526684   61804 cri.go:89] found id: ""
	I0814 01:08:13.526715   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.526726   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:13.526734   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:13.526797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:13.560258   61804 cri.go:89] found id: ""
	I0814 01:08:13.560281   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.560289   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:13.560296   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:13.560353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:13.592615   61804 cri.go:89] found id: ""
	I0814 01:08:13.592641   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.592653   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:13.592668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:13.592732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:13.627268   61804 cri.go:89] found id: ""
	I0814 01:08:13.627291   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.627299   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:13.627305   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:13.627363   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:13.661932   61804 cri.go:89] found id: ""
	I0814 01:08:13.661955   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.661963   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:13.661968   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:13.662024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:13.694724   61804 cri.go:89] found id: ""
	I0814 01:08:13.694750   61804 logs.go:276] 0 containers: []
	W0814 01:08:13.694760   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:13.694770   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:13.694785   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:13.759415   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:13.759436   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:13.759449   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:13.835496   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:13.835532   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:13.873749   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:13.873779   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:13.927612   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:13.927647   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:10.590789   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.090113   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:13.781937   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.281253   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:14.920559   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.418625   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:16.440696   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:16.455648   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:16.455734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:16.490557   61804 cri.go:89] found id: ""
	I0814 01:08:16.490587   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.490599   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:16.490606   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:16.490667   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:16.524268   61804 cri.go:89] found id: ""
	I0814 01:08:16.524294   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.524303   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:16.524315   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:16.524379   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:16.562651   61804 cri.go:89] found id: ""
	I0814 01:08:16.562686   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.562696   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:16.562708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:16.562771   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:16.598581   61804 cri.go:89] found id: ""
	I0814 01:08:16.598605   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.598613   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:16.598619   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:16.598669   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:16.646849   61804 cri.go:89] found id: ""
	I0814 01:08:16.646872   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.646880   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:16.646886   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:16.646939   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:16.698695   61804 cri.go:89] found id: ""
	I0814 01:08:16.698720   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.698727   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:16.698733   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:16.698793   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:16.748149   61804 cri.go:89] found id: ""
	I0814 01:08:16.748182   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.748193   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:16.748201   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:16.748263   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:16.783334   61804 cri.go:89] found id: ""
	I0814 01:08:16.783362   61804 logs.go:276] 0 containers: []
	W0814 01:08:16.783371   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:16.783378   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:16.783389   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:16.833178   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:16.833211   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:16.845843   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:16.845873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:16.916728   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:16.916754   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:16.916770   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:17.001194   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:17.001236   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:15.588888   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:17.589309   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.593806   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:18.780869   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:20.780899   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.918779   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.417464   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:19.540300   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:19.554740   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:19.554823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:19.590452   61804 cri.go:89] found id: ""
	I0814 01:08:19.590478   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.590489   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:19.590498   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:19.590559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:19.623643   61804 cri.go:89] found id: ""
	I0814 01:08:19.623673   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.623683   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:19.623691   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:19.623759   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:19.659205   61804 cri.go:89] found id: ""
	I0814 01:08:19.659228   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.659236   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:19.659243   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:19.659312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:19.695038   61804 cri.go:89] found id: ""
	I0814 01:08:19.695061   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.695068   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:19.695075   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:19.695132   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:19.728525   61804 cri.go:89] found id: ""
	I0814 01:08:19.728555   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.728568   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:19.728585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:19.728652   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:19.764153   61804 cri.go:89] found id: ""
	I0814 01:08:19.764180   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.764191   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:19.764198   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:19.764261   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:19.803346   61804 cri.go:89] found id: ""
	I0814 01:08:19.803382   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.803392   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:19.803400   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:19.803462   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:19.835783   61804 cri.go:89] found id: ""
	I0814 01:08:19.835811   61804 logs.go:276] 0 containers: []
	W0814 01:08:19.835818   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:19.835827   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:19.835839   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:19.889917   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:19.889961   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:19.903826   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:19.903858   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:19.977790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:19.977813   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:19.977832   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:20.053634   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:20.053672   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.598821   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:22.612128   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:22.612209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:22.647840   61804 cri.go:89] found id: ""
	I0814 01:08:22.647864   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.647873   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:22.647880   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:22.647942   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:22.681572   61804 cri.go:89] found id: ""
	I0814 01:08:22.681594   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.681601   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:22.681606   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:22.681670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:22.715737   61804 cri.go:89] found id: ""
	I0814 01:08:22.715785   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.715793   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:22.715799   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:22.715856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:22.750605   61804 cri.go:89] found id: ""
	I0814 01:08:22.750628   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.750636   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:22.750643   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:22.750693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:22.786410   61804 cri.go:89] found id: ""
	I0814 01:08:22.786434   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.786442   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:22.786447   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:22.786502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:22.821799   61804 cri.go:89] found id: ""
	I0814 01:08:22.821830   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.821840   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:22.821846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:22.821923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:22.861218   61804 cri.go:89] found id: ""
	I0814 01:08:22.861243   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.861254   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:22.861261   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:22.861324   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:22.896371   61804 cri.go:89] found id: ""
	I0814 01:08:22.896398   61804 logs.go:276] 0 containers: []
	W0814 01:08:22.896408   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:22.896419   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:22.896434   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:22.951998   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:22.952035   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:22.966214   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:22.966239   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:23.035790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:23.035812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:23.035824   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:23.119675   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:23.119708   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:22.090427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.100671   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:22.781758   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.280556   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:24.419130   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:26.918236   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:25.657771   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:25.671521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:25.671607   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:25.708419   61804 cri.go:89] found id: ""
	I0814 01:08:25.708451   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.708460   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:25.708466   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:25.708514   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:25.743263   61804 cri.go:89] found id: ""
	I0814 01:08:25.743296   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.743309   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:25.743318   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:25.743384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:25.773544   61804 cri.go:89] found id: ""
	I0814 01:08:25.773570   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.773580   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:25.773588   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:25.773649   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:25.805316   61804 cri.go:89] found id: ""
	I0814 01:08:25.805339   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.805347   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:25.805353   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:25.805404   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:25.837785   61804 cri.go:89] found id: ""
	I0814 01:08:25.837810   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.837818   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:25.837824   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:25.837893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:25.877145   61804 cri.go:89] found id: ""
	I0814 01:08:25.877171   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.877182   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:25.877190   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:25.877236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:25.913823   61804 cri.go:89] found id: ""
	I0814 01:08:25.913861   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.913872   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:25.913880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:25.913946   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:25.947397   61804 cri.go:89] found id: ""
	I0814 01:08:25.947419   61804 logs.go:276] 0 containers: []
	W0814 01:08:25.947427   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:25.947435   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:25.947446   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:26.023754   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:26.023812   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:26.060030   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:26.060068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:26.110625   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:26.110663   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:26.123952   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:26.123991   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:26.194210   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:28.694490   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:28.706976   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:28.707040   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:28.739739   61804 cri.go:89] found id: ""
	I0814 01:08:28.739768   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.739775   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:28.739781   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:28.739831   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:28.771179   61804 cri.go:89] found id: ""
	I0814 01:08:28.771217   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.771228   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:28.771237   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:28.771303   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:28.805634   61804 cri.go:89] found id: ""
	I0814 01:08:28.805661   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.805670   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:28.805675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:28.805727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:28.840796   61804 cri.go:89] found id: ""
	I0814 01:08:28.840819   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.840827   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:28.840833   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:28.840893   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:28.879627   61804 cri.go:89] found id: ""
	I0814 01:08:28.879656   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.879668   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:28.879675   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:28.879734   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:28.916568   61804 cri.go:89] found id: ""
	I0814 01:08:28.916588   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.916597   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:28.916602   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:28.916658   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:28.952959   61804 cri.go:89] found id: ""
	I0814 01:08:28.952986   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.952996   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:28.953003   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:28.953061   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:28.993496   61804 cri.go:89] found id: ""
	I0814 01:08:28.993527   61804 logs.go:276] 0 containers: []
	W0814 01:08:28.993538   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:28.993550   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:28.993565   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:29.079181   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:29.079219   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:29.121692   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:29.121718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:29.174008   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:29.174068   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:29.188872   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:29.188904   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:29.254381   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:26.589068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.590266   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:27.281232   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:29.781697   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:28.918512   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.418087   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:31.754986   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:31.767581   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:31.767656   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:31.803826   61804 cri.go:89] found id: ""
	I0814 01:08:31.803853   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.803861   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:31.803867   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:31.803927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:31.837958   61804 cri.go:89] found id: ""
	I0814 01:08:31.837986   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.837996   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:31.838004   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:31.838077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:31.869567   61804 cri.go:89] found id: ""
	I0814 01:08:31.869595   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.869604   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:31.869612   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:31.869680   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:31.906943   61804 cri.go:89] found id: ""
	I0814 01:08:31.906973   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.906985   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:31.906992   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:31.907059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:31.940969   61804 cri.go:89] found id: ""
	I0814 01:08:31.941006   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.941017   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:31.941025   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:31.941081   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:31.974546   61804 cri.go:89] found id: ""
	I0814 01:08:31.974578   61804 logs.go:276] 0 containers: []
	W0814 01:08:31.974588   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:31.974596   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:31.974657   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:32.007586   61804 cri.go:89] found id: ""
	I0814 01:08:32.007619   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.007633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:32.007641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:32.007703   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:32.040073   61804 cri.go:89] found id: ""
	I0814 01:08:32.040104   61804 logs.go:276] 0 containers: []
	W0814 01:08:32.040116   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:32.040128   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:32.040142   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:32.094938   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:32.094978   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:32.107967   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:32.108002   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:32.176290   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:32.176314   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:32.176326   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:32.251231   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:32.251269   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:30.590569   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.089507   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:32.287689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.781273   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:33.918103   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:36.417197   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:34.791693   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:34.804519   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:34.804582   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:34.838907   61804 cri.go:89] found id: ""
	I0814 01:08:34.838933   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.838941   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:34.838947   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:34.839008   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:34.869650   61804 cri.go:89] found id: ""
	I0814 01:08:34.869676   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.869684   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:34.869689   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:34.869739   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:34.903598   61804 cri.go:89] found id: ""
	I0814 01:08:34.903635   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.903648   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:34.903655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:34.903719   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:34.937101   61804 cri.go:89] found id: ""
	I0814 01:08:34.937131   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.937143   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:34.937151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:34.937214   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:34.969880   61804 cri.go:89] found id: ""
	I0814 01:08:34.969913   61804 logs.go:276] 0 containers: []
	W0814 01:08:34.969925   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:34.969933   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:34.969990   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:35.004158   61804 cri.go:89] found id: ""
	I0814 01:08:35.004185   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.004194   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:35.004200   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:35.004267   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:35.037368   61804 cri.go:89] found id: ""
	I0814 01:08:35.037397   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.037407   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:35.037415   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:35.037467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:35.071051   61804 cri.go:89] found id: ""
	I0814 01:08:35.071080   61804 logs.go:276] 0 containers: []
	W0814 01:08:35.071089   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:35.071102   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:35.071116   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:35.147845   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:35.147879   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.189235   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:35.189271   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:35.242094   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:35.242132   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:35.255405   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:35.255430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:35.325820   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:37.826188   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:37.839036   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:37.839117   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:37.876368   61804 cri.go:89] found id: ""
	I0814 01:08:37.876397   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.876406   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:37.876411   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:37.876468   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:37.916680   61804 cri.go:89] found id: ""
	I0814 01:08:37.916717   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.916727   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:37.916735   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:37.916802   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:37.951025   61804 cri.go:89] found id: ""
	I0814 01:08:37.951048   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.951056   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:37.951062   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:37.951122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:37.984837   61804 cri.go:89] found id: ""
	I0814 01:08:37.984865   61804 logs.go:276] 0 containers: []
	W0814 01:08:37.984873   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:37.984878   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:37.984928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:38.018722   61804 cri.go:89] found id: ""
	I0814 01:08:38.018744   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.018752   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:38.018757   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:38.018815   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:38.052306   61804 cri.go:89] found id: ""
	I0814 01:08:38.052337   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.052350   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:38.052358   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:38.052419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:38.086752   61804 cri.go:89] found id: ""
	I0814 01:08:38.086784   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.086801   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:38.086811   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:38.086877   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:38.119201   61804 cri.go:89] found id: ""
	I0814 01:08:38.119228   61804 logs.go:276] 0 containers: []
	W0814 01:08:38.119235   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:38.119243   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:38.119255   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:38.171460   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:38.171492   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:38.184712   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:38.184739   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:38.248529   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:38.248552   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:38.248568   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:38.324517   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:38.324556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:35.092682   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.590633   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.590761   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:37.280984   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:39.780961   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:38.417262   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.417822   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:42.918615   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:40.865218   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:40.877772   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:40.877847   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:40.910171   61804 cri.go:89] found id: ""
	I0814 01:08:40.910197   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.910204   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:40.910210   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:40.910257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:40.947205   61804 cri.go:89] found id: ""
	I0814 01:08:40.947234   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.947244   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:40.947249   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:40.947304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:40.979404   61804 cri.go:89] found id: ""
	I0814 01:08:40.979428   61804 logs.go:276] 0 containers: []
	W0814 01:08:40.979436   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:40.979442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:40.979500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:41.017710   61804 cri.go:89] found id: ""
	I0814 01:08:41.017737   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.017746   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:41.017752   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:41.017799   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:41.052240   61804 cri.go:89] found id: ""
	I0814 01:08:41.052269   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.052278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:41.052286   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:41.052353   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:41.084124   61804 cri.go:89] found id: ""
	I0814 01:08:41.084151   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.084159   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:41.084165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:41.084230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:41.120994   61804 cri.go:89] found id: ""
	I0814 01:08:41.121027   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.121039   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:41.121047   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:41.121106   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:41.155794   61804 cri.go:89] found id: ""
	I0814 01:08:41.155829   61804 logs.go:276] 0 containers: []
	W0814 01:08:41.155842   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:41.155854   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:41.155873   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:41.209146   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:41.209191   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:41.222112   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:41.222141   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:41.298512   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:41.298533   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:41.298550   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:41.378609   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:41.378645   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:43.924469   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:43.936857   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:43.936935   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:43.969234   61804 cri.go:89] found id: ""
	I0814 01:08:43.969267   61804 logs.go:276] 0 containers: []
	W0814 01:08:43.969276   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:43.969282   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:43.969348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:44.003814   61804 cri.go:89] found id: ""
	I0814 01:08:44.003841   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.003852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:44.003860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:44.003929   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:44.037828   61804 cri.go:89] found id: ""
	I0814 01:08:44.037858   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.037869   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:44.037877   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:44.037931   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:44.077084   61804 cri.go:89] found id: ""
	I0814 01:08:44.077110   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.077118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:44.077124   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:44.077174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:44.111028   61804 cri.go:89] found id: ""
	I0814 01:08:44.111054   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.111063   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:44.111070   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:44.111122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:44.143178   61804 cri.go:89] found id: ""
	I0814 01:08:44.143211   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.143222   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:44.143229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:44.143293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:44.177606   61804 cri.go:89] found id: ""
	I0814 01:08:44.177636   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.177648   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:44.177657   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:44.177723   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:44.210941   61804 cri.go:89] found id: ""
	I0814 01:08:44.210965   61804 logs.go:276] 0 containers: []
	W0814 01:08:44.210973   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:44.210982   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:44.210995   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:44.224219   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:44.224248   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:44.289411   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:44.289431   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:44.289442   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:44.369680   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:44.369720   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:44.407705   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:44.407742   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:42.088924   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.090237   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:41.781814   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:44.281794   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:45.418397   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:47.419132   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.962321   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:46.975711   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:46.975843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:47.008529   61804 cri.go:89] found id: ""
	I0814 01:08:47.008642   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.008651   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:47.008657   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:47.008707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:47.042469   61804 cri.go:89] found id: ""
	I0814 01:08:47.042498   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.042509   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:47.042518   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:47.042586   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:47.081186   61804 cri.go:89] found id: ""
	I0814 01:08:47.081214   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.081222   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:47.081229   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:47.081286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:47.117727   61804 cri.go:89] found id: ""
	I0814 01:08:47.117754   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.117765   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:47.117773   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:47.117858   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:47.151247   61804 cri.go:89] found id: ""
	I0814 01:08:47.151283   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.151298   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:47.151307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:47.151370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:47.185640   61804 cri.go:89] found id: ""
	I0814 01:08:47.185671   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.185681   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:47.185689   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:47.185755   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:47.220597   61804 cri.go:89] found id: ""
	I0814 01:08:47.220625   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.220633   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:47.220641   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:47.220714   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:47.257099   61804 cri.go:89] found id: ""
	I0814 01:08:47.257131   61804 logs.go:276] 0 containers: []
	W0814 01:08:47.257147   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:47.257162   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:47.257179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:47.307503   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:47.307538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:47.320882   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:47.320907   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:47.394519   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:47.394553   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:47.394567   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:47.475998   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:47.476058   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:46.091154   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.590382   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:46.780699   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:48.780773   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.281235   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:49.421293   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:51.918374   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:50.019454   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:50.033470   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:50.033550   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:50.070782   61804 cri.go:89] found id: ""
	I0814 01:08:50.070806   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.070813   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:50.070819   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:50.070881   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:50.104047   61804 cri.go:89] found id: ""
	I0814 01:08:50.104083   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.104092   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:50.104101   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:50.104172   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:50.139445   61804 cri.go:89] found id: ""
	I0814 01:08:50.139472   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.139480   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:50.139487   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:50.139545   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:50.173077   61804 cri.go:89] found id: ""
	I0814 01:08:50.173109   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.173118   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:50.173126   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:50.173189   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:50.204234   61804 cri.go:89] found id: ""
	I0814 01:08:50.204264   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.204273   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:50.204281   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:50.204342   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:50.237005   61804 cri.go:89] found id: ""
	I0814 01:08:50.237034   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.237044   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:50.237052   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:50.237107   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:50.270171   61804 cri.go:89] found id: ""
	I0814 01:08:50.270197   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.270204   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:50.270209   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:50.270274   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:50.304932   61804 cri.go:89] found id: ""
	I0814 01:08:50.304959   61804 logs.go:276] 0 containers: []
	W0814 01:08:50.304968   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:50.304980   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:50.305000   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:50.317524   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:50.317552   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:50.384790   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:50.384817   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:50.384833   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:50.461398   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:50.461432   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:50.518516   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:50.518545   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:53.069835   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:53.082707   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:53.082777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:53.119053   61804 cri.go:89] found id: ""
	I0814 01:08:53.119075   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.119083   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:53.119089   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:53.119138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:53.154565   61804 cri.go:89] found id: ""
	I0814 01:08:53.154598   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.154610   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:53.154618   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:53.154690   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:53.187144   61804 cri.go:89] found id: ""
	I0814 01:08:53.187171   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.187178   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:53.187184   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:53.187236   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:53.220965   61804 cri.go:89] found id: ""
	I0814 01:08:53.220989   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.220998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:53.221004   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:53.221062   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:53.256825   61804 cri.go:89] found id: ""
	I0814 01:08:53.256857   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.256868   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:53.256875   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:53.256941   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:53.295733   61804 cri.go:89] found id: ""
	I0814 01:08:53.295761   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.295768   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:53.295774   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:53.295822   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:53.328928   61804 cri.go:89] found id: ""
	I0814 01:08:53.328959   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.328970   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:53.328979   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:53.329049   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:53.362866   61804 cri.go:89] found id: ""
	I0814 01:08:53.362896   61804 logs.go:276] 0 containers: []
	W0814 01:08:53.362907   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:53.362919   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:53.362934   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:53.375681   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:53.375718   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:53.439108   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:53.439132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:53.439148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:53.524801   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:53.524838   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:53.560832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:53.560866   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:51.091445   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.589472   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:53.780960   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.281731   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:54.417207   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.417442   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:56.117383   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:56.129668   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:56.129729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:56.161928   61804 cri.go:89] found id: ""
	I0814 01:08:56.161953   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.161966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:56.161971   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:56.162017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:56.192303   61804 cri.go:89] found id: ""
	I0814 01:08:56.192332   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.192343   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:56.192360   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:56.192428   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:56.226668   61804 cri.go:89] found id: ""
	I0814 01:08:56.226696   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.226707   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:56.226715   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:56.226776   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:56.284959   61804 cri.go:89] found id: ""
	I0814 01:08:56.284987   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.284998   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:56.285006   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:56.285066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:56.317591   61804 cri.go:89] found id: ""
	I0814 01:08:56.317623   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.317633   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:56.317640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:56.317707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:56.350119   61804 cri.go:89] found id: ""
	I0814 01:08:56.350146   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.350157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:56.350165   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:56.350223   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:56.382204   61804 cri.go:89] found id: ""
	I0814 01:08:56.382231   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.382239   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:56.382244   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:56.382295   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:56.415098   61804 cri.go:89] found id: ""
	I0814 01:08:56.415130   61804 logs.go:276] 0 containers: []
	W0814 01:08:56.415140   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:56.415160   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:56.415174   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.466056   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:56.466094   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:56.480989   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:56.481019   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:56.550348   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:56.550371   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:56.550387   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:08:56.629331   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:56.629371   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.166791   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:08:59.179818   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:08:59.179907   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:08:59.212759   61804 cri.go:89] found id: ""
	I0814 01:08:59.212781   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.212789   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:08:59.212796   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:08:59.212851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:08:59.248330   61804 cri.go:89] found id: ""
	I0814 01:08:59.248354   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.248362   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:08:59.248368   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:08:59.248420   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:08:59.282101   61804 cri.go:89] found id: ""
	I0814 01:08:59.282123   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.282136   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:08:59.282142   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:08:59.282190   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:08:59.318477   61804 cri.go:89] found id: ""
	I0814 01:08:59.318502   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.318510   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:08:59.318516   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:08:59.318566   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:08:59.352473   61804 cri.go:89] found id: ""
	I0814 01:08:59.352499   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.352507   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:08:59.352514   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:08:59.352583   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:08:59.386004   61804 cri.go:89] found id: ""
	I0814 01:08:59.386032   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.386056   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:08:59.386065   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:08:59.386127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:08:59.424280   61804 cri.go:89] found id: ""
	I0814 01:08:59.424309   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.424334   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:08:59.424340   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:08:59.424390   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:08:59.461555   61804 cri.go:89] found id: ""
	I0814 01:08:59.461579   61804 logs.go:276] 0 containers: []
	W0814 01:08:59.461587   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:08:59.461596   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:08:59.461608   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:08:59.501997   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:08:59.502032   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:08:56.089181   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.089349   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.780740   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:01.280817   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:58.417590   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:00.417914   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.418923   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:08:59.554228   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:08:59.554276   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:08:59.569169   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:08:59.569201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:08:59.635758   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:08:59.635779   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:08:59.635793   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.211233   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:02.223647   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:02.223733   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:02.257172   61804 cri.go:89] found id: ""
	I0814 01:09:02.257204   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.257215   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:02.257222   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:02.257286   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:02.290090   61804 cri.go:89] found id: ""
	I0814 01:09:02.290123   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.290132   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:02.290139   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:02.290207   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:02.324436   61804 cri.go:89] found id: ""
	I0814 01:09:02.324461   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.324469   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:02.324474   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:02.324531   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:02.357092   61804 cri.go:89] found id: ""
	I0814 01:09:02.357116   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.357124   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:02.357130   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:02.357191   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:02.390237   61804 cri.go:89] found id: ""
	I0814 01:09:02.390265   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.390278   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:02.390287   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:02.390357   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:02.425960   61804 cri.go:89] found id: ""
	I0814 01:09:02.425988   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.425996   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:02.426002   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:02.426072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:02.459644   61804 cri.go:89] found id: ""
	I0814 01:09:02.459683   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.459694   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:02.459702   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:02.459764   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:02.496147   61804 cri.go:89] found id: ""
	I0814 01:09:02.496169   61804 logs.go:276] 0 containers: []
	W0814 01:09:02.496182   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:02.496190   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:02.496202   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:02.576512   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:02.576547   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:02.612410   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:02.612440   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:02.665810   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:02.665850   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:02.680992   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:02.681020   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:02.781868   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:00.089915   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:02.090971   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.589030   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:03.780689   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.784928   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:04.917086   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:06.918108   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:05.282001   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:05.294986   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:05.295064   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:05.326520   61804 cri.go:89] found id: ""
	I0814 01:09:05.326547   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.326555   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:05.326562   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:05.326618   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:05.358458   61804 cri.go:89] found id: ""
	I0814 01:09:05.358482   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.358490   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:05.358497   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:05.358556   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:05.393729   61804 cri.go:89] found id: ""
	I0814 01:09:05.393763   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.393771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:05.393777   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:05.393824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:05.433291   61804 cri.go:89] found id: ""
	I0814 01:09:05.433319   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.433327   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:05.433334   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:05.433384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:05.467163   61804 cri.go:89] found id: ""
	I0814 01:09:05.467187   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.467198   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:05.467206   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:05.467284   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:05.499718   61804 cri.go:89] found id: ""
	I0814 01:09:05.499747   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.499758   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:05.499768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:05.499819   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:05.532818   61804 cri.go:89] found id: ""
	I0814 01:09:05.532847   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.532859   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:05.532867   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:05.532920   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:05.566908   61804 cri.go:89] found id: ""
	I0814 01:09:05.566936   61804 logs.go:276] 0 containers: []
	W0814 01:09:05.566947   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:05.566957   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:05.566969   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:05.621247   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:05.621283   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:05.635566   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:05.635606   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:05.698579   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:05.698606   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:05.698622   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:05.780861   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:05.780897   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.322931   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:08.336836   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:08.336918   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:08.369802   61804 cri.go:89] found id: ""
	I0814 01:09:08.369833   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.369842   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:08.369849   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:08.369899   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:08.415414   61804 cri.go:89] found id: ""
	I0814 01:09:08.415441   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.415451   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:08.415459   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:08.415525   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:08.477026   61804 cri.go:89] found id: ""
	I0814 01:09:08.477058   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.477069   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:08.477077   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:08.477145   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:08.522385   61804 cri.go:89] found id: ""
	I0814 01:09:08.522417   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.522429   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:08.522438   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:08.522502   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:08.555803   61804 cri.go:89] found id: ""
	I0814 01:09:08.555839   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.555848   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:08.555855   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:08.555922   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:08.589910   61804 cri.go:89] found id: ""
	I0814 01:09:08.589932   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.589939   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:08.589945   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:08.589992   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:08.622278   61804 cri.go:89] found id: ""
	I0814 01:09:08.622313   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.622321   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:08.622328   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:08.622381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:08.655221   61804 cri.go:89] found id: ""
	I0814 01:09:08.655248   61804 logs.go:276] 0 containers: []
	W0814 01:09:08.655257   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:08.655266   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:08.655280   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:08.691932   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:08.691965   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:08.742551   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:08.742586   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:08.755590   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:08.755619   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:08.822365   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:08.822384   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:08.822401   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:06.589889   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.089601   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:08.281249   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:10.781156   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:09.418153   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.418222   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:11.397107   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:11.409425   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:11.409498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:11.442680   61804 cri.go:89] found id: ""
	I0814 01:09:11.442711   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.442724   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:11.442732   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:11.442791   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.482991   61804 cri.go:89] found id: ""
	I0814 01:09:11.483016   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.483023   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:11.483034   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:11.483099   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:11.516069   61804 cri.go:89] found id: ""
	I0814 01:09:11.516091   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.516100   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:11.516105   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:11.516154   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:11.549745   61804 cri.go:89] found id: ""
	I0814 01:09:11.549773   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.549780   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:11.549787   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:11.549851   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:11.582542   61804 cri.go:89] found id: ""
	I0814 01:09:11.582569   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.582577   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:11.582583   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:11.582642   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:11.616238   61804 cri.go:89] found id: ""
	I0814 01:09:11.616261   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.616269   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:11.616275   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:11.616330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:11.650238   61804 cri.go:89] found id: ""
	I0814 01:09:11.650286   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.650301   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:11.650311   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:11.650384   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:11.683100   61804 cri.go:89] found id: ""
	I0814 01:09:11.683128   61804 logs.go:276] 0 containers: []
	W0814 01:09:11.683139   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:11.683149   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:11.683169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:11.760248   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:11.760292   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:11.798965   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:11.798996   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:11.853109   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:11.853145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:11.865645   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:11.865682   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:11.935478   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.436076   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:14.448846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:14.448927   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:14.483833   61804 cri.go:89] found id: ""
	I0814 01:09:14.483873   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.483882   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:14.483887   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:14.483940   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:11.089723   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.090681   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:12.781680   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.782443   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:13.918681   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:16.417982   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:14.522643   61804 cri.go:89] found id: ""
	I0814 01:09:14.522670   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.522678   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:14.522683   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:14.522783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:14.564084   61804 cri.go:89] found id: ""
	I0814 01:09:14.564111   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.564121   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:14.564129   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:14.564193   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:14.603532   61804 cri.go:89] found id: ""
	I0814 01:09:14.603560   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.603571   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:14.603578   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:14.603641   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:14.644420   61804 cri.go:89] found id: ""
	I0814 01:09:14.644443   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.644450   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:14.644455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:14.644503   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:14.681652   61804 cri.go:89] found id: ""
	I0814 01:09:14.681685   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.681693   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:14.681701   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:14.681757   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:14.715830   61804 cri.go:89] found id: ""
	I0814 01:09:14.715852   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.715860   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:14.715866   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:14.715912   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:14.752305   61804 cri.go:89] found id: ""
	I0814 01:09:14.752336   61804 logs.go:276] 0 containers: []
	W0814 01:09:14.752343   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:14.752352   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:14.752367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:14.765250   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:14.765287   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:14.834427   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:14.834453   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:14.834470   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:14.914683   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:14.914721   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:14.959497   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:14.959534   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.513077   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:17.526300   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:17.526409   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:17.563670   61804 cri.go:89] found id: ""
	I0814 01:09:17.563700   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.563709   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:17.563715   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:17.563768   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:17.599019   61804 cri.go:89] found id: ""
	I0814 01:09:17.599048   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.599070   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:17.599078   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:17.599158   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:17.633378   61804 cri.go:89] found id: ""
	I0814 01:09:17.633407   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.633422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:17.633430   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:17.633494   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:17.667180   61804 cri.go:89] found id: ""
	I0814 01:09:17.667213   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.667225   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:17.667233   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:17.667293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:17.704552   61804 cri.go:89] found id: ""
	I0814 01:09:17.704582   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.704595   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:17.704603   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:17.704670   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:17.735937   61804 cri.go:89] found id: ""
	I0814 01:09:17.735966   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.735978   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:17.735987   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:17.736057   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:17.772223   61804 cri.go:89] found id: ""
	I0814 01:09:17.772251   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.772263   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:17.772271   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:17.772335   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:17.807432   61804 cri.go:89] found id: ""
	I0814 01:09:17.807462   61804 logs.go:276] 0 containers: []
	W0814 01:09:17.807474   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:17.807485   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:17.807499   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:17.860093   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:17.860135   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:17.874608   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:17.874644   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:17.948791   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:17.948812   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:17.948827   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:18.024743   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:18.024778   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:15.590951   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.090491   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:17.296200   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:19.780540   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:18.419867   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.917387   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:22.918933   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:20.559854   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:20.572920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:20.573004   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:20.609163   61804 cri.go:89] found id: ""
	I0814 01:09:20.609189   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.609200   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:20.609205   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:20.609253   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:20.646826   61804 cri.go:89] found id: ""
	I0814 01:09:20.646852   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.646859   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:20.646865   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:20.646913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:20.682403   61804 cri.go:89] found id: ""
	I0814 01:09:20.682432   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.682443   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:20.682452   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:20.682515   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:20.717678   61804 cri.go:89] found id: ""
	I0814 01:09:20.717700   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.717708   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:20.717713   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:20.717761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:20.748451   61804 cri.go:89] found id: ""
	I0814 01:09:20.748481   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.748492   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:20.748501   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:20.748567   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:20.785684   61804 cri.go:89] found id: ""
	I0814 01:09:20.785712   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.785722   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:20.785729   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:20.785792   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:20.826195   61804 cri.go:89] found id: ""
	I0814 01:09:20.826225   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.826233   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:20.826239   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:20.826305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:20.860155   61804 cri.go:89] found id: ""
	I0814 01:09:20.860181   61804 logs.go:276] 0 containers: []
	W0814 01:09:20.860190   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:20.860198   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:20.860209   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:20.909428   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:20.909464   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:20.923178   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:20.923208   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:20.994502   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.994537   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:20.994556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:21.074097   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:21.074138   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:23.615557   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:23.628906   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:23.628976   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:23.661923   61804 cri.go:89] found id: ""
	I0814 01:09:23.661954   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.661966   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:23.661973   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:23.662033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:23.693786   61804 cri.go:89] found id: ""
	I0814 01:09:23.693815   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.693828   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:23.693844   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:23.693938   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:23.726707   61804 cri.go:89] found id: ""
	I0814 01:09:23.726739   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.726750   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:23.726758   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:23.726823   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:23.757433   61804 cri.go:89] found id: ""
	I0814 01:09:23.757457   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.757465   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:23.757471   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:23.757521   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:23.789493   61804 cri.go:89] found id: ""
	I0814 01:09:23.789516   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.789523   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:23.789529   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:23.789589   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:23.824641   61804 cri.go:89] found id: ""
	I0814 01:09:23.824668   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.824676   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:23.824685   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:23.824758   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:23.857651   61804 cri.go:89] found id: ""
	I0814 01:09:23.857678   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.857688   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:23.857697   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:23.857761   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:23.898116   61804 cri.go:89] found id: ""
	I0814 01:09:23.898138   61804 logs.go:276] 0 containers: []
	W0814 01:09:23.898145   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:23.898154   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:23.898169   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:23.982086   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:23.982121   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:24.018340   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:24.018372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:24.067264   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:24.067300   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:24.081648   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:24.081681   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:24.156566   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:20.590620   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:21.781174   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:23.782333   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.282145   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:25.417101   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.417596   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:26.656930   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:26.669540   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:26.669616   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:26.701786   61804 cri.go:89] found id: ""
	I0814 01:09:26.701819   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.701828   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:26.701834   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:26.701897   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:26.734372   61804 cri.go:89] found id: ""
	I0814 01:09:26.734397   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.734405   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:26.734410   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:26.734463   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:26.767100   61804 cri.go:89] found id: ""
	I0814 01:09:26.767125   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.767140   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:26.767148   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:26.767210   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:26.802145   61804 cri.go:89] found id: ""
	I0814 01:09:26.802168   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.802177   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:26.802182   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:26.802230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:26.835588   61804 cri.go:89] found id: ""
	I0814 01:09:26.835616   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.835624   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:26.835630   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:26.835685   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:26.868104   61804 cri.go:89] found id: ""
	I0814 01:09:26.868130   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.868138   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:26.868144   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:26.868209   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:26.899709   61804 cri.go:89] found id: ""
	I0814 01:09:26.899736   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.899755   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:26.899764   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:26.899824   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:26.934964   61804 cri.go:89] found id: ""
	I0814 01:09:26.934989   61804 logs.go:276] 0 containers: []
	W0814 01:09:26.934996   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:26.935005   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:26.935023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:26.970832   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:26.970859   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:27.022349   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:27.022390   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:27.035656   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:27.035683   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:27.115414   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:27.115441   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:27.115458   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:25.090543   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:27.590088   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.590449   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:28.781004   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:30.781622   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.920036   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:32.417796   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:29.701338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:29.713890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:29.713947   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:29.745724   61804 cri.go:89] found id: ""
	I0814 01:09:29.745749   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.745756   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:29.745763   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:29.745816   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:29.777020   61804 cri.go:89] found id: ""
	I0814 01:09:29.777047   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.777057   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:29.777065   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:29.777130   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:29.813355   61804 cri.go:89] found id: ""
	I0814 01:09:29.813386   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.813398   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:29.813406   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:29.813464   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:29.845184   61804 cri.go:89] found id: ""
	I0814 01:09:29.845212   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.845222   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:29.845227   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:29.845288   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:29.881128   61804 cri.go:89] found id: ""
	I0814 01:09:29.881158   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.881169   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:29.881177   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:29.881249   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:29.912034   61804 cri.go:89] found id: ""
	I0814 01:09:29.912078   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.912091   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:29.912100   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:29.912173   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:29.950345   61804 cri.go:89] found id: ""
	I0814 01:09:29.950378   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.950386   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:29.950392   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:29.950454   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:29.989118   61804 cri.go:89] found id: ""
	I0814 01:09:29.989150   61804 logs.go:276] 0 containers: []
	W0814 01:09:29.989161   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:29.989172   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:29.989186   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:30.042231   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:30.042262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:30.056231   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:30.056262   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:30.130840   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:30.130871   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:30.130891   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:30.209332   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:30.209372   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.751036   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:32.765011   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:32.765072   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:32.802505   61804 cri.go:89] found id: ""
	I0814 01:09:32.802533   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.802543   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:32.802548   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:32.802600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:32.835127   61804 cri.go:89] found id: ""
	I0814 01:09:32.835165   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.835174   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:32.835179   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:32.835230   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:32.871768   61804 cri.go:89] found id: ""
	I0814 01:09:32.871793   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.871800   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:32.871814   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:32.871865   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:32.907601   61804 cri.go:89] found id: ""
	I0814 01:09:32.907625   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.907634   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:32.907640   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:32.907693   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:32.942615   61804 cri.go:89] found id: ""
	I0814 01:09:32.942640   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.942649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:32.942655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:32.942707   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:32.975436   61804 cri.go:89] found id: ""
	I0814 01:09:32.975467   61804 logs.go:276] 0 containers: []
	W0814 01:09:32.975478   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:32.975486   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:32.975546   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:33.008982   61804 cri.go:89] found id: ""
	I0814 01:09:33.009013   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.009021   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:33.009027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:33.009077   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:33.042312   61804 cri.go:89] found id: ""
	I0814 01:09:33.042345   61804 logs.go:276] 0 containers: []
	W0814 01:09:33.042362   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:33.042371   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:33.042383   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:33.102102   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:33.102145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:33.116497   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:33.116527   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:33.191821   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:33.191847   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:33.191862   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:33.272510   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:33.272562   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:32.090206   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.589260   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:33.280565   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.280918   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:34.417839   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:36.417950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:35.813246   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:35.826224   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:35.826304   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:35.859220   61804 cri.go:89] found id: ""
	I0814 01:09:35.859252   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.859263   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:35.859274   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:35.859349   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:35.896460   61804 cri.go:89] found id: ""
	I0814 01:09:35.896485   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.896494   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:35.896500   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:35.896559   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:35.929796   61804 cri.go:89] found id: ""
	I0814 01:09:35.929819   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.929827   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:35.929832   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:35.929883   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:35.963928   61804 cri.go:89] found id: ""
	I0814 01:09:35.963954   61804 logs.go:276] 0 containers: []
	W0814 01:09:35.963965   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:35.963972   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:35.964033   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:36.004613   61804 cri.go:89] found id: ""
	I0814 01:09:36.004644   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.004654   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:36.004660   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:36.004729   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:36.039212   61804 cri.go:89] found id: ""
	I0814 01:09:36.039241   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.039249   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:36.039256   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:36.039311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:36.072917   61804 cri.go:89] found id: ""
	I0814 01:09:36.072945   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.072954   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:36.072960   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:36.073020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:36.113542   61804 cri.go:89] found id: ""
	I0814 01:09:36.113573   61804 logs.go:276] 0 containers: []
	W0814 01:09:36.113584   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:36.113594   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:36.113610   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:36.152043   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:36.152071   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:36.203163   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:36.203200   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:36.216733   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:36.216764   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:36.288171   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.288193   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:36.288206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:38.868008   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:38.881009   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:38.881089   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:38.914485   61804 cri.go:89] found id: ""
	I0814 01:09:38.914515   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.914527   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:38.914535   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:38.914595   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:38.950810   61804 cri.go:89] found id: ""
	I0814 01:09:38.950841   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.950852   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:38.950860   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:38.950913   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:38.984938   61804 cri.go:89] found id: ""
	I0814 01:09:38.984964   61804 logs.go:276] 0 containers: []
	W0814 01:09:38.984972   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:38.984980   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:38.985050   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:39.017383   61804 cri.go:89] found id: ""
	I0814 01:09:39.017408   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.017415   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:39.017421   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:39.017467   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:39.050669   61804 cri.go:89] found id: ""
	I0814 01:09:39.050694   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.050706   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:39.050712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:39.050777   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:39.083840   61804 cri.go:89] found id: ""
	I0814 01:09:39.083870   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.083882   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:39.083903   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:39.083973   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:39.117880   61804 cri.go:89] found id: ""
	I0814 01:09:39.117905   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.117913   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:39.117920   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:39.117989   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:39.151956   61804 cri.go:89] found id: ""
	I0814 01:09:39.151981   61804 logs.go:276] 0 containers: []
	W0814 01:09:39.151991   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:39.152002   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:39.152017   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:39.229820   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:39.229860   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:39.266989   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:39.267023   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:39.317673   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:39.317709   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:39.332968   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:39.332997   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:39.401164   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:36.591033   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.089990   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:37.282218   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:39.781653   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:38.918816   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.417142   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:41.901891   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:41.914735   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:41.914810   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:41.950605   61804 cri.go:89] found id: ""
	I0814 01:09:41.950633   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.950641   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:41.950648   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:41.950699   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:41.984517   61804 cri.go:89] found id: ""
	I0814 01:09:41.984541   61804 logs.go:276] 0 containers: []
	W0814 01:09:41.984549   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:41.984555   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:41.984609   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:42.018378   61804 cri.go:89] found id: ""
	I0814 01:09:42.018405   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.018413   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:42.018418   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:42.018475   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:42.057088   61804 cri.go:89] found id: ""
	I0814 01:09:42.057126   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.057134   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:42.057140   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:42.057208   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:42.093523   61804 cri.go:89] found id: ""
	I0814 01:09:42.093548   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.093564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:42.093569   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:42.093621   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:42.127036   61804 cri.go:89] found id: ""
	I0814 01:09:42.127059   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.127067   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:42.127072   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:42.127123   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:42.161194   61804 cri.go:89] found id: ""
	I0814 01:09:42.161218   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.161226   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:42.161231   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:42.161279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:42.195595   61804 cri.go:89] found id: ""
	I0814 01:09:42.195624   61804 logs.go:276] 0 containers: []
	W0814 01:09:42.195633   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:42.195643   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:42.195656   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:42.251942   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:42.251974   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:42.309142   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:42.309179   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:42.322696   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:42.322724   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:42.389877   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:42.389903   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:42.389918   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:41.589650   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.589804   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:42.281108   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.780495   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:43.417531   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:45.419122   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:47.918282   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:44.974486   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:44.986981   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:44.987044   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:45.023400   61804 cri.go:89] found id: ""
	I0814 01:09:45.023426   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.023435   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:45.023441   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:45.023492   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:45.057923   61804 cri.go:89] found id: ""
	I0814 01:09:45.057948   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.057961   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:45.057968   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:45.058024   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:45.092882   61804 cri.go:89] found id: ""
	I0814 01:09:45.092908   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.092918   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:45.092924   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:45.092987   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:45.128802   61804 cri.go:89] found id: ""
	I0814 01:09:45.128832   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.128840   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:45.128846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:45.128909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:45.164528   61804 cri.go:89] found id: ""
	I0814 01:09:45.164556   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.164564   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:45.164571   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:45.164619   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:45.198115   61804 cri.go:89] found id: ""
	I0814 01:09:45.198145   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.198157   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:45.198164   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:45.198231   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:45.230356   61804 cri.go:89] found id: ""
	I0814 01:09:45.230389   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.230401   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:45.230409   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:45.230471   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:45.268342   61804 cri.go:89] found id: ""
	I0814 01:09:45.268367   61804 logs.go:276] 0 containers: []
	W0814 01:09:45.268376   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:45.268384   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:45.268398   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:45.321257   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:45.321294   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:45.334182   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:45.334206   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:45.409140   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.409162   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:45.409178   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:45.493974   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:45.494013   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.032466   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:48.045704   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:48.045783   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:48.084634   61804 cri.go:89] found id: ""
	I0814 01:09:48.084663   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.084674   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:48.084683   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:48.084743   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:48.121917   61804 cri.go:89] found id: ""
	I0814 01:09:48.121941   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.121948   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:48.121953   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:48.122014   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:48.156005   61804 cri.go:89] found id: ""
	I0814 01:09:48.156029   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.156038   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:48.156046   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:48.156104   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:48.190105   61804 cri.go:89] found id: ""
	I0814 01:09:48.190127   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.190136   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:48.190141   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:48.190202   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:48.222617   61804 cri.go:89] found id: ""
	I0814 01:09:48.222641   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.222649   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:48.222655   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:48.222727   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:48.256198   61804 cri.go:89] found id: ""
	I0814 01:09:48.256222   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.256230   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:48.256236   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:48.256294   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:48.294389   61804 cri.go:89] found id: ""
	I0814 01:09:48.294420   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.294428   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:48.294434   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:48.294496   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:48.331503   61804 cri.go:89] found id: ""
	I0814 01:09:48.331540   61804 logs.go:276] 0 containers: []
	W0814 01:09:48.331553   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:48.331565   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:48.331585   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:48.407092   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:48.407134   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:48.446890   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:48.446920   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:48.498523   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:48.498559   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:48.511540   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:48.511578   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:48.576299   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:45.590239   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:48.090689   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:46.781816   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:49.280840   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.281638   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:50.418154   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:52.917611   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:51.076974   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:51.089440   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:51.089508   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:51.122770   61804 cri.go:89] found id: ""
	I0814 01:09:51.122794   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.122806   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:51.122814   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:51.122873   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:51.159045   61804 cri.go:89] found id: ""
	I0814 01:09:51.159075   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.159084   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:51.159090   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:51.159144   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:51.192983   61804 cri.go:89] found id: ""
	I0814 01:09:51.193013   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.193022   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:51.193028   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:51.193087   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:51.225112   61804 cri.go:89] found id: ""
	I0814 01:09:51.225137   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.225145   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:51.225151   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:51.225204   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:51.257785   61804 cri.go:89] found id: ""
	I0814 01:09:51.257813   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.257822   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:51.257828   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:51.257879   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:51.289863   61804 cri.go:89] found id: ""
	I0814 01:09:51.289891   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.289902   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:51.289910   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:51.289963   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:51.321834   61804 cri.go:89] found id: ""
	I0814 01:09:51.321860   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.321870   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:51.321880   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:51.321949   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:51.354494   61804 cri.go:89] found id: ""
	I0814 01:09:51.354517   61804 logs.go:276] 0 containers: []
	W0814 01:09:51.354526   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:51.354535   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:51.354556   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:51.424704   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:51.424726   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:51.424741   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:51.505301   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:51.505337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:51.544567   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:51.544603   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:51.598924   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:51.598954   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.113501   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:54.128000   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:54.128075   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:54.162230   61804 cri.go:89] found id: ""
	I0814 01:09:54.162260   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.162270   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:54.162277   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:54.162327   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:54.196395   61804 cri.go:89] found id: ""
	I0814 01:09:54.196421   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.196432   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:54.196440   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:54.196500   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:54.229685   61804 cri.go:89] found id: ""
	I0814 01:09:54.229718   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.229730   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:54.229738   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:54.229825   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:54.263141   61804 cri.go:89] found id: ""
	I0814 01:09:54.263174   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.263185   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:54.263193   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:54.263257   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:54.298658   61804 cri.go:89] found id: ""
	I0814 01:09:54.298689   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.298700   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:54.298708   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:54.298794   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:54.331254   61804 cri.go:89] found id: ""
	I0814 01:09:54.331278   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.331287   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:54.331294   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:54.331348   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:54.362930   61804 cri.go:89] found id: ""
	I0814 01:09:54.362954   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.362961   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:54.362967   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:54.363017   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:54.403663   61804 cri.go:89] found id: ""
	I0814 01:09:54.403690   61804 logs.go:276] 0 containers: []
	W0814 01:09:54.403697   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:54.403706   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:54.403725   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:54.460623   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:54.460661   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:54.478728   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:54.478757   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:09:50.589697   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.089733   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:53.781208   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.282166   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:54.918107   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:56.918502   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	W0814 01:09:54.548615   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:54.548640   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:54.548654   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:54.624350   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:54.624385   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:57.164202   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:09:57.176107   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:09:57.176174   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:09:57.211204   61804 cri.go:89] found id: ""
	I0814 01:09:57.211230   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.211238   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:09:57.211245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:09:57.211305   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:09:57.243004   61804 cri.go:89] found id: ""
	I0814 01:09:57.243035   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.243046   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:09:57.243052   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:09:57.243113   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:09:57.275315   61804 cri.go:89] found id: ""
	I0814 01:09:57.275346   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.275357   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:09:57.275365   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:09:57.275435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:09:57.311856   61804 cri.go:89] found id: ""
	I0814 01:09:57.311878   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.311885   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:09:57.311890   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:09:57.311944   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:09:57.345305   61804 cri.go:89] found id: ""
	I0814 01:09:57.345335   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.345347   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:09:57.345355   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:09:57.345419   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:09:57.378001   61804 cri.go:89] found id: ""
	I0814 01:09:57.378033   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.378058   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:09:57.378066   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:09:57.378127   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:09:57.410664   61804 cri.go:89] found id: ""
	I0814 01:09:57.410691   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.410700   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:09:57.410706   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:09:57.410766   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:09:57.443477   61804 cri.go:89] found id: ""
	I0814 01:09:57.443505   61804 logs.go:276] 0 containers: []
	W0814 01:09:57.443514   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:09:57.443523   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:09:57.443538   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:09:57.497674   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:09:57.497710   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:09:57.511347   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:09:57.511380   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:09:57.580111   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:09:57.580137   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:09:57.580153   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:09:57.660119   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:09:57.660166   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:09:55.089771   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:57.090272   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.591289   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:58.780363   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.781165   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:09:59.417990   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:01.419950   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:00.203685   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:00.224480   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:00.224552   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:00.265353   61804 cri.go:89] found id: ""
	I0814 01:10:00.265379   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.265388   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:00.265395   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:00.265449   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:00.301086   61804 cri.go:89] found id: ""
	I0814 01:10:00.301112   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.301122   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:00.301129   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:00.301203   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:00.335369   61804 cri.go:89] found id: ""
	I0814 01:10:00.335400   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.335422   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:00.335442   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:00.335501   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:00.369341   61804 cri.go:89] found id: ""
	I0814 01:10:00.369367   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.369377   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:00.369384   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:00.369446   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:00.403958   61804 cri.go:89] found id: ""
	I0814 01:10:00.403985   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.403993   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:00.403998   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:00.404059   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:00.437921   61804 cri.go:89] found id: ""
	I0814 01:10:00.437944   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.437952   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:00.437958   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:00.438020   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:00.471076   61804 cri.go:89] found id: ""
	I0814 01:10:00.471116   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.471127   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:00.471135   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:00.471194   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:00.506002   61804 cri.go:89] found id: ""
	I0814 01:10:00.506026   61804 logs.go:276] 0 containers: []
	W0814 01:10:00.506034   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:00.506056   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:00.506074   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:00.576627   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:00.576653   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:00.576668   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:00.661108   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:00.661150   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:00.699083   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:00.699111   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:00.748944   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:00.748981   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.262338   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:03.274831   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:03.274909   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:03.308413   61804 cri.go:89] found id: ""
	I0814 01:10:03.308445   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.308456   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:03.308463   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:03.308530   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:03.340763   61804 cri.go:89] found id: ""
	I0814 01:10:03.340789   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.340798   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:03.340804   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:03.340872   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:03.375914   61804 cri.go:89] found id: ""
	I0814 01:10:03.375945   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.375956   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:03.375964   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:03.376028   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:03.408904   61804 cri.go:89] found id: ""
	I0814 01:10:03.408934   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.408944   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:03.408951   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:03.409015   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:03.443664   61804 cri.go:89] found id: ""
	I0814 01:10:03.443694   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.443704   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:03.443712   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:03.443774   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:03.475742   61804 cri.go:89] found id: ""
	I0814 01:10:03.475775   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.475786   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:03.475794   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:03.475856   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:03.509252   61804 cri.go:89] found id: ""
	I0814 01:10:03.509297   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.509309   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:03.509315   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:03.509380   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:03.544311   61804 cri.go:89] found id: ""
	I0814 01:10:03.544332   61804 logs.go:276] 0 containers: []
	W0814 01:10:03.544341   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:03.544350   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:03.544365   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:03.620425   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:03.620459   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:03.658574   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:03.658601   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:03.708154   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:03.708187   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:03.721959   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:03.721986   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:03.789903   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:02.088526   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:04.092427   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:02.781595   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.280678   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:03.917268   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:05.917774   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.918699   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:06.290301   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:06.301935   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:06.301994   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:06.336211   61804 cri.go:89] found id: ""
	I0814 01:10:06.336231   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.336239   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:06.336245   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:06.336293   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:06.369489   61804 cri.go:89] found id: ""
	I0814 01:10:06.369517   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.369526   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:06.369532   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:06.369590   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:06.401142   61804 cri.go:89] found id: ""
	I0814 01:10:06.401167   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.401176   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:06.401183   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:06.401233   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:06.432265   61804 cri.go:89] found id: ""
	I0814 01:10:06.432294   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.432303   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:06.432308   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:06.432368   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:06.464786   61804 cri.go:89] found id: ""
	I0814 01:10:06.464815   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.464826   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:06.464834   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:06.464928   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.497984   61804 cri.go:89] found id: ""
	I0814 01:10:06.498013   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.498021   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:06.498027   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:06.498122   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:06.528722   61804 cri.go:89] found id: ""
	I0814 01:10:06.528750   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.528760   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:06.528768   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:06.528836   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:06.559920   61804 cri.go:89] found id: ""
	I0814 01:10:06.559947   61804 logs.go:276] 0 containers: []
	W0814 01:10:06.559955   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:06.559964   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:06.559976   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:06.609227   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:06.609256   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:06.621627   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:06.621652   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:06.686110   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:06.686132   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:06.686145   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:06.767163   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:06.767201   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.302611   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:09.314804   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:09.314863   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:09.347222   61804 cri.go:89] found id: ""
	I0814 01:10:09.347248   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.347257   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:09.347262   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:09.347311   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:09.382005   61804 cri.go:89] found id: ""
	I0814 01:10:09.382035   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.382059   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:09.382067   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:09.382129   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:09.413728   61804 cri.go:89] found id: ""
	I0814 01:10:09.413759   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.413771   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:09.413778   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:09.413843   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:09.446389   61804 cri.go:89] found id: ""
	I0814 01:10:09.446422   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.446435   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:09.446455   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:09.446511   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:09.482224   61804 cri.go:89] found id: ""
	I0814 01:10:09.482253   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.482261   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:09.482267   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:09.482330   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:06.589791   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:09.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782212   61447 pod_ready.go:102] pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:07.782245   61447 pod_ready.go:81] duration metric: took 4m0.007594209s for pod "metrics-server-6867b74b74-gb2dt" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:07.782257   61447 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:07.782267   61447 pod_ready.go:38] duration metric: took 4m3.607931792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:07.782286   61447 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:10:07.782318   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:07.782382   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:07.840346   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:07.840370   61447 cri.go:89] found id: ""
	I0814 01:10:07.840378   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:07.840426   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.844721   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:07.844775   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:07.879720   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:07.879748   61447 cri.go:89] found id: ""
	I0814 01:10:07.879756   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:07.879805   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.883392   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:07.883455   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:07.919395   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:07.919414   61447 cri.go:89] found id: ""
	I0814 01:10:07.919423   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:07.919481   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.923650   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:07.923715   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:07.960706   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:07.960734   61447 cri.go:89] found id: ""
	I0814 01:10:07.960744   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:07.960792   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:07.964923   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:07.964984   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:08.000107   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.000127   61447 cri.go:89] found id: ""
	I0814 01:10:08.000134   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:08.000187   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.004313   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:08.004375   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:08.039317   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.039346   61447 cri.go:89] found id: ""
	I0814 01:10:08.039356   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:08.039433   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.043054   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:08.043122   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:08.078708   61447 cri.go:89] found id: ""
	I0814 01:10:08.078745   61447 logs.go:276] 0 containers: []
	W0814 01:10:08.078756   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:08.078764   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:08.078826   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:08.119964   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.119989   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.119995   61447 cri.go:89] found id: ""
	I0814 01:10:08.120004   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:08.120067   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.123852   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:08.127530   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:08.127553   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:08.144431   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:08.144466   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:08.267719   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:08.267751   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:08.308901   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:08.308936   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:08.357837   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:08.357868   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:08.393863   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:08.393890   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:08.430599   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:08.430631   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:08.512420   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:08.512460   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:08.561482   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:08.561512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:08.598681   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:08.598705   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:08.634798   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:08.634835   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.113197   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.113249   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:09.166264   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:09.166294   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:10.417612   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.418303   61689 pod_ready.go:102] pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:12.911546   61689 pod_ready.go:81] duration metric: took 4m0.00009953s for pod "metrics-server-6867b74b74-6cql9" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:12.911580   61689 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 01:10:12.911610   61689 pod_ready.go:38] duration metric: took 4m7.021956674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:12.911643   61689 kubeadm.go:597] duration metric: took 4m14.591841657s to restartPrimaryControlPlane
	W0814 01:10:12.911710   61689 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:12.911741   61689 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:09.517482   61804 cri.go:89] found id: ""
	I0814 01:10:09.517511   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.517529   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:09.517538   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:09.517600   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:09.550825   61804 cri.go:89] found id: ""
	I0814 01:10:09.550849   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.550857   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:09.550863   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:09.550923   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:09.585090   61804 cri.go:89] found id: ""
	I0814 01:10:09.585122   61804 logs.go:276] 0 containers: []
	W0814 01:10:09.585129   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:09.585137   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:09.585148   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:09.636337   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:09.636367   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:09.649807   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:09.649837   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:09.720720   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:09.720743   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:09.720759   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:09.805985   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:09.806027   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.350767   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:12.364446   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:12.364516   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:12.396353   61804 cri.go:89] found id: ""
	I0814 01:10:12.396387   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.396400   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:12.396409   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:12.396478   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:12.427988   61804 cri.go:89] found id: ""
	I0814 01:10:12.428010   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.428022   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:12.428033   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:12.428094   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:12.461269   61804 cri.go:89] found id: ""
	I0814 01:10:12.461295   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.461304   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:12.461310   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:12.461364   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:12.495746   61804 cri.go:89] found id: ""
	I0814 01:10:12.495772   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.495783   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:12.495791   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:12.495850   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:12.528862   61804 cri.go:89] found id: ""
	I0814 01:10:12.528891   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.528901   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:12.528909   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:12.528969   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:12.562169   61804 cri.go:89] found id: ""
	I0814 01:10:12.562196   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.562206   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:12.562214   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:12.562279   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.601089   61804 cri.go:89] found id: ""
	I0814 01:10:12.601118   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.601129   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.601137   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:12.601200   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:12.635250   61804 cri.go:89] found id: ""
	I0814 01:10:12.635276   61804 logs.go:276] 0 containers: []
	W0814 01:10:12.635285   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:12.635293   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.635306   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.686904   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.686937   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:12.702218   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.702244   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:12.767008   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:12.767034   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.767051   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.849601   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.849639   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:11.090068   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:13.090518   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:11.715364   61447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:11.731610   61447 api_server.go:72] duration metric: took 4m15.320142444s to wait for apiserver process to appear ...
	I0814 01:10:11.731645   61447 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:10:11.731689   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:11.731748   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:11.769722   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:11.769754   61447 cri.go:89] found id: ""
	I0814 01:10:11.769763   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:11.769824   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.774334   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:11.774403   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:11.808392   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:11.808412   61447 cri.go:89] found id: ""
	I0814 01:10:11.808419   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:11.808464   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.812100   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:11.812154   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:11.846105   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:11.846133   61447 cri.go:89] found id: ""
	I0814 01:10:11.846144   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:11.846202   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.850271   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:11.850330   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:11.889364   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:11.889389   61447 cri.go:89] found id: ""
	I0814 01:10:11.889399   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:11.889446   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.893413   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:11.893483   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:11.929675   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:11.929696   61447 cri.go:89] found id: ""
	I0814 01:10:11.929704   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:11.929764   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.933454   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:11.933513   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:11.971708   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:11.971734   61447 cri.go:89] found id: ""
	I0814 01:10:11.971743   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:11.971801   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:11.975943   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:11.976005   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:12.010171   61447 cri.go:89] found id: ""
	I0814 01:10:12.010198   61447 logs.go:276] 0 containers: []
	W0814 01:10:12.010209   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:12.010217   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:12.010277   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:12.045333   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.045354   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.045359   61447 cri.go:89] found id: ""
	I0814 01:10:12.045367   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:12.045431   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.049476   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:12.053712   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:12.053732   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:12.109678   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:12.109706   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:12.146300   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:12.146327   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:12.186556   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:12.186585   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:12.660273   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:12.660307   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:12.739687   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:12.739723   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:12.859358   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:12.859388   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:12.908682   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:12.908712   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:12.943374   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:12.943403   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:12.985875   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:12.985915   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:13.001173   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:13.001206   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:13.048387   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:13.048419   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:13.088258   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:13.088295   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.634029   61447 api_server.go:253] Checking apiserver healthz at https://192.168.72.94:8443/healthz ...
	I0814 01:10:15.639313   61447 api_server.go:279] https://192.168.72.94:8443/healthz returned 200:
	ok
	I0814 01:10:15.640756   61447 api_server.go:141] control plane version: v1.31.0
	I0814 01:10:15.640778   61447 api_server.go:131] duration metric: took 3.909125329s to wait for apiserver health ...
	I0814 01:10:15.640785   61447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:10:15.640808   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.640853   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.687350   61447 cri.go:89] found id: "ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:15.687373   61447 cri.go:89] found id: ""
	I0814 01:10:15.687381   61447 logs.go:276] 1 containers: [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e]
	I0814 01:10:15.687460   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.691407   61447 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.691473   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.730526   61447 cri.go:89] found id: "1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:15.730551   61447 cri.go:89] found id: ""
	I0814 01:10:15.730560   61447 logs.go:276] 1 containers: [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388]
	I0814 01:10:15.730618   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.734328   61447 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.734390   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.773166   61447 cri.go:89] found id: "7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:15.773185   61447 cri.go:89] found id: ""
	I0814 01:10:15.773192   61447 logs.go:276] 1 containers: [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc]
	I0814 01:10:15.773236   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.778757   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.778815   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.813960   61447 cri.go:89] found id: "89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:15.813984   61447 cri.go:89] found id: ""
	I0814 01:10:15.813993   61447 logs.go:276] 1 containers: [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2]
	I0814 01:10:15.814068   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.818154   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.818206   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.859408   61447 cri.go:89] found id: "0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:15.859432   61447 cri.go:89] found id: ""
	I0814 01:10:15.859440   61447 logs.go:276] 1 containers: [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12]
	I0814 01:10:15.859487   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.864494   61447 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.864583   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.900903   61447 cri.go:89] found id: "3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:15.900922   61447 cri.go:89] found id: ""
	I0814 01:10:15.900932   61447 logs.go:276] 1 containers: [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091]
	I0814 01:10:15.900982   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.905238   61447 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.905298   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.941185   61447 cri.go:89] found id: ""
	I0814 01:10:15.941215   61447 logs.go:276] 0 containers: []
	W0814 01:10:15.941226   61447 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.941233   61447 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 01:10:15.941293   61447 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 01:10:15.980737   61447 cri.go:89] found id: "d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:15.980756   61447 cri.go:89] found id: "bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:15.980760   61447 cri.go:89] found id: ""
	I0814 01:10:15.980766   61447 logs.go:276] 2 containers: [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768]
	I0814 01:10:15.980809   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.985209   61447 ssh_runner.go:195] Run: which crictl
	I0814 01:10:15.989469   61447 logs.go:123] Gathering logs for coredns [7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc] ...
	I0814 01:10:15.989492   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d3cb1d648607a62a84856164fa12f6d400c0d4316d7bda5d83a448870c145fc"
	I0814 01:10:16.026888   61447 logs.go:123] Gathering logs for kube-proxy [0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12] ...
	I0814 01:10:16.026917   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ec88a5a7a9d566fabdf1393667a95e5eac0ed7f2a2aaa326ee51d3e99a72b12"
	I0814 01:10:16.071726   61447 logs.go:123] Gathering logs for storage-provisioner [d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff] ...
	I0814 01:10:16.071754   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4d7da10edbe3c5e4d9a0997c7f8909523ed34eacac6b44dc982bf6ab7504eff"
	I0814 01:10:16.109685   61447 logs.go:123] Gathering logs for storage-provisioner [bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768] ...
	I0814 01:10:16.109710   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bacb411cbea2000ee28b8a5eb1c371d4094e22dea31e33892860568e08ee9768"
	I0814 01:10:16.145898   61447 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:16.145928   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.387785   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:15.401850   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:15.401916   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:15.441217   61804 cri.go:89] found id: ""
	I0814 01:10:15.441240   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.441255   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:15.441261   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:15.441312   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:15.475123   61804 cri.go:89] found id: ""
	I0814 01:10:15.475158   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.475167   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:15.475172   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:15.475234   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:15.509696   61804 cri.go:89] found id: ""
	I0814 01:10:15.509725   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.509733   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:15.509739   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:15.509797   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:15.542584   61804 cri.go:89] found id: ""
	I0814 01:10:15.542615   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.542625   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:15.542632   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:15.542701   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:15.576508   61804 cri.go:89] found id: ""
	I0814 01:10:15.576540   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.576552   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:15.576558   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:15.576622   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:15.613618   61804 cri.go:89] found id: ""
	I0814 01:10:15.613649   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.613660   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:15.613669   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:15.613732   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:15.646153   61804 cri.go:89] found id: ""
	I0814 01:10:15.646173   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.646182   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:15.646189   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:15.646241   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:15.681417   61804 cri.go:89] found id: ""
	I0814 01:10:15.681444   61804 logs.go:276] 0 containers: []
	W0814 01:10:15.681455   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:15.681466   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:15.681483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:15.763989   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:15.764026   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:15.803304   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:15.803337   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:15.872591   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:15.872630   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:15.886469   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:15.886504   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:15.956403   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.457103   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:18.470059   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:18.470138   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:18.505369   61804 cri.go:89] found id: ""
	I0814 01:10:18.505399   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.505410   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:18.505419   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:18.505481   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:18.536719   61804 cri.go:89] found id: ""
	I0814 01:10:18.536750   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.536781   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:18.536790   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:18.536845   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:18.571048   61804 cri.go:89] found id: ""
	I0814 01:10:18.571077   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.571089   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:18.571096   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:18.571161   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:18.605547   61804 cri.go:89] found id: ""
	I0814 01:10:18.605569   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.605578   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:18.605585   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:18.605645   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:18.637177   61804 cri.go:89] found id: ""
	I0814 01:10:18.637199   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.637207   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:18.637213   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:18.637275   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:18.674976   61804 cri.go:89] found id: ""
	I0814 01:10:18.675003   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.675012   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:18.675017   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:18.675066   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:18.709808   61804 cri.go:89] found id: ""
	I0814 01:10:18.709832   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.709840   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:18.709846   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:18.709902   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:18.743577   61804 cri.go:89] found id: ""
	I0814 01:10:18.743601   61804 logs.go:276] 0 containers: []
	W0814 01:10:18.743607   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:18.743615   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:18.743635   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:18.794913   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:18.794944   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:18.807665   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:18.807692   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:18.877814   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:18.877835   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:18.877847   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:18.962319   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:18.962356   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.533474   61447 logs.go:123] Gathering logs for container status ...
	I0814 01:10:16.533523   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:16.579098   61447 logs.go:123] Gathering logs for kube-apiserver [ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e] ...
	I0814 01:10:16.579129   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddba3ebb8413d6647bf6c8fe0bff16a1a10b35bcd7219cad5b66979c372cd92e"
	I0814 01:10:16.620711   61447 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:16.620744   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:16.633968   61447 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:16.634005   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 01:10:16.733947   61447 logs.go:123] Gathering logs for etcd [1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388] ...
	I0814 01:10:16.733985   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1632d4b88f7f0e7e802d1848acf6c67950cfa7cdfbe5b4dd7f6fbc467a969388"
	I0814 01:10:16.785475   61447 logs.go:123] Gathering logs for kube-scheduler [89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2] ...
	I0814 01:10:16.785512   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89953f1dc813e32748579ac4c2541c0f99b1443ae3956d85a367db06810107b2"
	I0814 01:10:16.826307   61447 logs.go:123] Gathering logs for kube-controller-manager [3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091] ...
	I0814 01:10:16.826334   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ef9bf666bbbc4c4c8b8036d2fa7e15e882f2bb301fe7d3b63a483d8b38e6091"
	I0814 01:10:16.879391   61447 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:16.879422   61447 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:19.453998   61447 system_pods.go:59] 8 kube-system pods found
	I0814 01:10:19.454028   61447 system_pods.go:61] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.454034   61447 system_pods.go:61] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.454050   61447 system_pods.go:61] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.454056   61447 system_pods.go:61] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.454060   61447 system_pods.go:61] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.454065   61447 system_pods.go:61] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.454074   61447 system_pods.go:61] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.454079   61447 system_pods.go:61] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.454090   61447 system_pods.go:74] duration metric: took 3.813297982s to wait for pod list to return data ...
	I0814 01:10:19.454101   61447 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:10:19.456941   61447 default_sa.go:45] found service account: "default"
	I0814 01:10:19.456969   61447 default_sa.go:55] duration metric: took 2.858057ms for default service account to be created ...
	I0814 01:10:19.456980   61447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:10:19.461101   61447 system_pods.go:86] 8 kube-system pods found
	I0814 01:10:19.461125   61447 system_pods.go:89] "coredns-6f6b679f8f-dz9zk" [67e29ce3-7f67-4b96-8030-c980773b5772] Running
	I0814 01:10:19.461133   61447 system_pods.go:89] "etcd-no-preload-776907" [b81b7341-dcd8-4374-8241-8797eb33d707] Running
	I0814 01:10:19.461138   61447 system_pods.go:89] "kube-apiserver-no-preload-776907" [33b066e2-28ef-46a7-95d7-b17806cdbde6] Running
	I0814 01:10:19.461144   61447 system_pods.go:89] "kube-controller-manager-no-preload-776907" [1de07b1f-7e0d-4704-84dc-fbb1280fc3bf] Running
	I0814 01:10:19.461150   61447 system_pods.go:89] "kube-proxy-pgm9t" [efad60b0-c62e-4c47-974b-98fdca9d3496] Running
	I0814 01:10:19.461155   61447 system_pods.go:89] "kube-scheduler-no-preload-776907" [6a57c2f5-6194-4e84-bfd3-985a6ff2333d] Running
	I0814 01:10:19.461166   61447 system_pods.go:89] "metrics-server-6867b74b74-gb2dt" [c950c58e-c5c3-4535-b10f-f4379ff03409] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:10:19.461178   61447 system_pods.go:89] "storage-provisioner" [d0ba9510-e0a5-4558-98e3-a9510920f93a] Running
	I0814 01:10:19.461191   61447 system_pods.go:126] duration metric: took 4.203785ms to wait for k8s-apps to be running ...
	I0814 01:10:19.461203   61447 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:10:19.461253   61447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:19.476698   61447 system_svc.go:56] duration metric: took 15.486945ms WaitForService to wait for kubelet
	I0814 01:10:19.476735   61447 kubeadm.go:582] duration metric: took 4m23.065272349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:10:19.476762   61447 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:10:19.480352   61447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:10:19.480377   61447 node_conditions.go:123] node cpu capacity is 2
	I0814 01:10:19.480392   61447 node_conditions.go:105] duration metric: took 3.624166ms to run NodePressure ...
	I0814 01:10:19.480407   61447 start.go:241] waiting for startup goroutines ...
	I0814 01:10:19.480426   61447 start.go:246] waiting for cluster config update ...
	I0814 01:10:19.480440   61447 start.go:255] writing updated cluster config ...
	I0814 01:10:19.480790   61447 ssh_runner.go:195] Run: rm -f paused
	I0814 01:10:19.529809   61447 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:10:19.531666   61447 out.go:177] * Done! kubectl is now configured to use "no-preload-776907" cluster and "default" namespace by default
	I0814 01:10:15.590230   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:18.089286   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:21.500596   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:21.513404   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:10:21.513479   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:10:21.554150   61804 cri.go:89] found id: ""
	I0814 01:10:21.554179   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.554188   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:10:21.554194   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:10:21.554251   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:10:21.588785   61804 cri.go:89] found id: ""
	I0814 01:10:21.588807   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.588815   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:10:21.588820   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:10:21.588870   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:10:21.621537   61804 cri.go:89] found id: ""
	I0814 01:10:21.621572   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.621581   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:10:21.621587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:10:21.621640   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:10:21.660651   61804 cri.go:89] found id: ""
	I0814 01:10:21.660680   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.660690   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:10:21.660698   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:10:21.660763   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:10:21.697233   61804 cri.go:89] found id: ""
	I0814 01:10:21.697259   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.697269   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:10:21.697276   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:10:21.697347   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:10:21.728389   61804 cri.go:89] found id: ""
	I0814 01:10:21.728416   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.728428   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:10:21.728435   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:10:21.728498   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:10:21.761502   61804 cri.go:89] found id: ""
	I0814 01:10:21.761534   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.761546   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:10:21.761552   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:10:21.761624   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:10:21.796569   61804 cri.go:89] found id: ""
	I0814 01:10:21.796598   61804 logs.go:276] 0 containers: []
	W0814 01:10:21.796610   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:10:21.796621   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:10:21.796637   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:10:21.845444   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:10:21.845483   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:10:21.858017   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:10:21.858057   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:10:21.930417   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:10:21.930443   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:10:21.930460   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:10:22.005912   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:10:22.005951   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 01:10:20.089593   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:22.089797   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.591315   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:24.545241   61804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:10:24.559341   61804 kubeadm.go:597] duration metric: took 4m4.643567639s to restartPrimaryControlPlane
	W0814 01:10:24.559407   61804 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:24.559430   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:28.294241   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.734785326s)
	I0814 01:10:28.294319   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:28.311148   61804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:28.321145   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:28.335025   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:28.335042   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:28.335084   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:10:28.348778   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:28.348838   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:28.362209   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:10:28.374981   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:28.375054   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:28.385686   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.396608   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:28.396681   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:28.410155   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:10:28.419462   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:28.419524   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:10:28.429089   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:28.506715   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:10:28.506816   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:28.668770   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:28.668908   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:28.669020   61804 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:28.865442   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:28.866971   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:28.867065   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:28.867151   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:28.867270   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:28.867370   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:28.867486   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:28.867575   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:28.867668   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:28.867762   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:28.867854   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:28.867969   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:28.868026   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:28.868095   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:29.109820   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:29.305485   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:29.447627   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:29.519749   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:29.534507   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:29.535858   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:29.535915   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:29.679100   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:27.089933   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.590579   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:29.681457   61804 out.go:204]   - Booting up control plane ...
	I0814 01:10:29.681596   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:29.686193   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:29.690458   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:29.690602   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:29.692526   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:10:32.089926   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:34.090129   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.266092   61689 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.354324468s)
	I0814 01:10:39.266176   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:10:39.281039   61689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:10:39.290328   61689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:10:39.299179   61689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:10:39.299200   61689 kubeadm.go:157] found existing configuration files:
	
	I0814 01:10:39.299240   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 01:10:39.307972   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:10:39.308029   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:10:39.316639   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 01:10:39.324834   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:10:39.324907   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:10:39.333911   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.342294   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:10:39.342358   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:10:39.351209   61689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 01:10:39.361364   61689 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:10:39.361429   61689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
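
	The cleanup above is the stale-kubeconfig pass: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not match (here every file is simply absent, so each grep exits with status 2 and the rm -f is a no-op). A minimal Go sketch of the same idea follows; it assumes local file access rather than minikube's ssh_runner and is illustrative only, not minikube's implementation.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// staleConfigs reports which kubeconfig files are missing or do not
	// reference the expected control-plane endpoint, mirroring the
	// grep / rm -f sequence in the log above.
	func staleConfigs(endpoint string, paths []string) []string {
		var stale []string
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				stale = append(stale, p) // candidate for rm -f
			}
		}
		return stale
	}

	func main() {
		paths := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, p := range staleConfigs("https://control-plane.minikube.internal:8444", paths) {
			fmt.Println("would remove:", p)
		}
	}
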
	I0814 01:10:39.370737   61689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:10:39.422751   61689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:10:39.422819   61689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:10:39.536672   61689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:10:39.536827   61689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:10:39.536965   61689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:10:39.546793   61689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:10:36.590409   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.090160   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:39.548749   61689 out.go:204]   - Generating certificates and keys ...
	I0814 01:10:39.548852   61689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:10:39.548936   61689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:10:39.549054   61689 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:10:39.549147   61689 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:10:39.549236   61689 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:10:39.549354   61689 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:10:39.549454   61689 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:10:39.549540   61689 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:10:39.549647   61689 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:10:39.549725   61689 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:10:39.549779   61689 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:10:39.549857   61689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:10:39.626351   61689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:10:39.760278   61689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:10:39.866008   61689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:10:39.999161   61689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:10:40.196721   61689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:10:40.197188   61689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:10:40.199882   61689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:10:40.201618   61689 out.go:204]   - Booting up control plane ...
	I0814 01:10:40.201746   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:10:40.201813   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:10:40.201869   61689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:10:40.219199   61689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:10:40.227902   61689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:10:40.227973   61689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:10:40.361233   61689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:10:40.361348   61689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:10:40.862332   61689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269742ms
	I0814 01:10:40.862432   61689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:10:41.590443   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:43.590766   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:45.864038   61689 kubeadm.go:310] [api-check] The API server is healthy after 5.001460061s
	I0814 01:10:45.878388   61689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:10:45.896709   61689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:10:45.940134   61689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:10:45.940348   61689 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-585256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:10:45.955748   61689 kubeadm.go:310] [bootstrap-token] Using token: 8dipep.54emqs990as2h2yu
	I0814 01:10:45.957107   61689 out.go:204]   - Configuring RBAC rules ...
	I0814 01:10:45.957260   61689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:10:45.967198   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:10:45.981109   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:10:45.984971   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:10:45.990218   61689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:10:45.994132   61689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:10:46.271392   61689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:10:46.713198   61689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:10:47.271788   61689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:10:47.271821   61689 kubeadm.go:310] 
	I0814 01:10:47.271873   61689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:10:47.271880   61689 kubeadm.go:310] 
	I0814 01:10:47.271970   61689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:10:47.271983   61689 kubeadm.go:310] 
	I0814 01:10:47.272035   61689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:10:47.272118   61689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:10:47.272195   61689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:10:47.272219   61689 kubeadm.go:310] 
	I0814 01:10:47.272313   61689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:10:47.272340   61689 kubeadm.go:310] 
	I0814 01:10:47.272418   61689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:10:47.272431   61689 kubeadm.go:310] 
	I0814 01:10:47.272493   61689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:10:47.272603   61689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:10:47.272718   61689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:10:47.272736   61689 kubeadm.go:310] 
	I0814 01:10:47.272851   61689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:10:47.272978   61689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:10:47.272988   61689 kubeadm.go:310] 
	I0814 01:10:47.273093   61689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273238   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:10:47.273276   61689 kubeadm.go:310] 	--control-plane 
	I0814 01:10:47.273290   61689 kubeadm.go:310] 
	I0814 01:10:47.273405   61689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:10:47.273413   61689 kubeadm.go:310] 
	I0814 01:10:47.273513   61689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 8dipep.54emqs990as2h2yu \
	I0814 01:10:47.273659   61689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:10:47.274832   61689 kubeadm.go:310] W0814 01:10:39.407507    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275253   61689 kubeadm.go:310] W0814 01:10:39.408398    2549 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:10:47.275402   61689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:10:47.275444   61689 cni.go:84] Creating CNI manager for ""
	I0814 01:10:47.275455   61689 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:10:47.277239   61689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:10:47.278570   61689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:10:47.289683   61689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:10:47.306392   61689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.306474   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-585256 minikube.k8s.io/updated_at=2024_08_14T01_10_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=default-k8s-diff-port-585256 minikube.k8s.io/primary=true
	I0814 01:10:47.471053   61689 ops.go:34] apiserver oom_adj: -16
	I0814 01:10:47.471227   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:47.971669   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:46.089776   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.589378   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:48.472147   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:48.971874   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.471867   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:49.972002   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.471298   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:50.971656   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.471610   61689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:10:51.548562   61689 kubeadm.go:1113] duration metric: took 4.24215834s to wait for elevateKubeSystemPrivileges
	I0814 01:10:51.548600   61689 kubeadm.go:394] duration metric: took 4m53.28604263s to StartCluster
	I0814 01:10:51.548621   61689 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.548708   61689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:10:51.551834   61689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:10:51.552154   61689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:10:51.552236   61689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:10:51.552311   61689 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.552343   61689 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-585256"
	I0814 01:10:51.552341   61689 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-585256"
	W0814 01:10:51.552354   61689 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:10:51.552384   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552387   61689 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.552396   61689 addons.go:243] addon metrics-server should already be in state true
	I0814 01:10:51.552416   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.552423   61689 config.go:182] Loaded profile config "default-k8s-diff-port-585256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:10:51.552805   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552842   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.552855   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.552865   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553056   61689 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-585256"
	I0814 01:10:51.553092   61689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-585256"
	I0814 01:10:51.553476   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.553519   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.553870   61689 out.go:177] * Verifying Kubernetes components...
	I0814 01:10:51.555358   61689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:10:51.569380   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0814 01:10:51.569570   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0814 01:10:51.569920   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570057   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.570516   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570536   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570648   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.570672   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.570891   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.570981   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.571148   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.571564   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.571600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.572161   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
	I0814 01:10:51.572637   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.573134   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.573153   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.574142   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.574576   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.574600   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.575008   61689 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-585256"
	W0814 01:10:51.575026   61689 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:10:51.575056   61689 host.go:66] Checking if "default-k8s-diff-port-585256" exists ...
	I0814 01:10:51.575459   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.575500   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.587910   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0814 01:10:51.588640   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.589298   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.589318   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.589938   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.590198   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.591151   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40625
	I0814 01:10:51.591786   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.592257   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.592427   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.592444   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.592742   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.592959   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.594517   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.594851   61689 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:10:51.596245   61689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:10:51.596263   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:10:51.596277   61689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:10:51.596296   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.597335   61689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.597351   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:10:51.597365   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.599147   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I0814 01:10:51.599559   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.600041   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.600062   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.600442   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.601105   61689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:10:51.601131   61689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:10:51.601316   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601345   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.601367   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601408   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.601889   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.601893   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.602060   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.602226   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.606415   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.606437   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.606582   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.606793   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.607035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.607200   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.623773   61689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0814 01:10:51.624272   61689 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:10:51.624752   61689 main.go:141] libmachine: Using API Version  1
	I0814 01:10:51.624772   61689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:10:51.625130   61689 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:10:51.625309   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetState
	I0814 01:10:51.627055   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .DriverName
	I0814 01:10:51.627259   61689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.627272   61689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:10:51.627284   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHHostname
	I0814 01:10:51.630492   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.630890   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:bd:a3", ip: ""} in network mk-default-k8s-diff-port-585256: {Iface:virbr1 ExpiryTime:2024-08-14 02:05:42 +0000 UTC Type:0 Mac:52:54:00:00:bd:a3 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:default-k8s-diff-port-585256 Clientid:01:52:54:00:00:bd:a3}
	I0814 01:10:51.630904   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | domain default-k8s-diff-port-585256 has defined IP address 192.168.39.110 and MAC address 52:54:00:00:bd:a3 in network mk-default-k8s-diff-port-585256
	I0814 01:10:51.631066   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHPort
	I0814 01:10:51.631226   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHKeyPath
	I0814 01:10:51.631389   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .GetSSHUsername
	I0814 01:10:51.631501   61689 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/default-k8s-diff-port-585256/id_rsa Username:docker}
	I0814 01:10:51.744471   61689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:10:51.762256   61689 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.782968   61689 node_ready.go:49] node "default-k8s-diff-port-585256" has status "Ready":"True"
	I0814 01:10:51.782999   61689 node_ready.go:38] duration metric: took 20.706198ms for node "default-k8s-diff-port-585256" to be "Ready" ...
	I0814 01:10:51.783011   61689 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:51.796967   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:51.866263   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:10:51.867193   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:10:51.880992   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:10:51.881017   61689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:10:51.927059   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:10:51.927081   61689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:10:51.987114   61689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:51.987134   61689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:10:52.053818   61689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:10:52.977726   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.111426777s)
	I0814 01:10:52.977791   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977789   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110564484s)
	I0814 01:10:52.977844   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.977863   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.977805   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978191   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978210   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978222   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978230   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.978236   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978282   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978310   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.978325   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:52.978335   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:52.978869   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.978909   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:52.979017   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:52.981465   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:52.981488   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.039845   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.039866   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.040156   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.040174   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.040217   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) DBG | Closing plugin on server side
	I0814 01:10:53.239968   61689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186108272s)
	I0814 01:10:53.240018   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240035   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240360   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240378   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240387   61689 main.go:141] libmachine: Making call to close driver server
	I0814 01:10:53.240395   61689 main.go:141] libmachine: (default-k8s-diff-port-585256) Calling .Close
	I0814 01:10:53.240672   61689 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:10:53.240686   61689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:10:53.240696   61689 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-585256"
	I0814 01:10:53.242401   61689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:10:50.591245   61115 pod_ready.go:102] pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:52.584492   61115 pod_ready.go:81] duration metric: took 4m0.000968161s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" ...
	E0814 01:10:52.584532   61115 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-82tmq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 01:10:52.584557   61115 pod_ready.go:38] duration metric: took 4m8.538973262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:10:52.584585   61115 kubeadm.go:597] duration metric: took 4m16.433276087s to restartPrimaryControlPlane
	W0814 01:10:52.584639   61115 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 01:10:52.584666   61115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:10:53.243906   61689 addons.go:510] duration metric: took 1.691669156s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
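
	Enabling the addons above amounts to copying the manifests into /etc/kubernetes/addons and running the bundled kubectl against /var/lib/minikube/kubeconfig. A rough local equivalent, assuming kubectl on PATH instead of the /var/lib/minikube/binaries copy, could be sketched as:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifests shells out to kubectl apply for a set of addon
	// manifests, mirroring the ssh_runner kubectl invocations in the
	// log above (illustrative only; assumes kubectl is on PATH).
	func applyManifests(kubeconfig string, manifests []string) error {
		args := []string{"--kubeconfig", kubeconfig, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		fmt.Println(applyManifests("/var/lib/minikube/kubeconfig", manifests))
	}
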
	I0814 01:10:53.804696   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:56.305075   61689 pod_ready.go:102] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"False"
	I0814 01:10:57.805174   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.805202   61689 pod_ready.go:81] duration metric: took 6.008208867s for pod "coredns-6f6b679f8f-hngz9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.805214   61689 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809693   61689 pod_ready.go:92] pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:57.809714   61689 pod_ready.go:81] duration metric: took 4.491999ms for pod "coredns-6f6b679f8f-jmqk7" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:57.809726   61689 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816199   61689 pod_ready.go:92] pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.816228   61689 pod_ready.go:81] duration metric: took 2.006493576s for pod "etcd-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.816241   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821351   61689 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.821374   61689 pod_ready.go:81] duration metric: took 5.126272ms for pod "kube-apiserver-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.821384   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825182   61689 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.825200   61689 pod_ready.go:81] duration metric: took 3.810193ms for pod "kube-controller-manager-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.825209   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829240   61689 pod_ready.go:92] pod "kube-proxy-rg8h9" in "kube-system" namespace has status "Ready":"True"
	I0814 01:10:59.829259   61689 pod_ready.go:81] duration metric: took 4.043044ms for pod "kube-proxy-rg8h9" in "kube-system" namespace to be "Ready" ...
	I0814 01:10:59.829269   61689 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602253   61689 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:00.602276   61689 pod_ready.go:81] duration metric: took 773.000181ms for pod "kube-scheduler-default-k8s-diff-port-585256" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:00.602285   61689 pod_ready.go:38] duration metric: took 8.819260447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:00.602301   61689 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:00.602352   61689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:00.620930   61689 api_server.go:72] duration metric: took 9.068741768s to wait for apiserver process to appear ...
	I0814 01:11:00.620954   61689 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:00.620973   61689 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8444/healthz ...
	I0814 01:11:00.625960   61689 api_server.go:279] https://192.168.39.110:8444/healthz returned 200:
	ok
	I0814 01:11:00.626930   61689 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:00.626948   61689 api_server.go:131] duration metric: took 5.98825ms to wait for apiserver health ...
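
	The healthz wait above issues a plain GET against https://192.168.39.110:8444/healthz and treats a 200 "ok" response as healthy. A small stand-alone probe in that spirit (TLS verification skipped because the bootstrap certificate is self-signed; a sketch, not the api_server.go code itself) might look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz performs a single GET against the apiserver /healthz
	// endpoint and reports whether it answered 200 with body "ok".
	func probeHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := probeHealthz("https://192.168.39.110:8444/healthz")
		fmt.Println(ok, err)
	}
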
	I0814 01:11:00.626956   61689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:00.805157   61689 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:00.805183   61689 system_pods.go:61] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:00.805187   61689 system_pods.go:61] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:00.805190   61689 system_pods.go:61] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:00.805194   61689 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:00.805197   61689 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:00.805200   61689 system_pods.go:61] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:00.805203   61689 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:00.805209   61689 system_pods.go:61] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:00.805213   61689 system_pods.go:61] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:00.805219   61689 system_pods.go:74] duration metric: took 178.259422ms to wait for pod list to return data ...
	I0814 01:11:00.805226   61689 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:01.001973   61689 default_sa.go:45] found service account: "default"
	I0814 01:11:01.002000   61689 default_sa.go:55] duration metric: took 196.764266ms for default service account to be created ...
	I0814 01:11:01.002010   61689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:01.203660   61689 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:01.203683   61689 system_pods.go:89] "coredns-6f6b679f8f-hngz9" [213f9a45-596b-47b3-9c37-ceae021433ea] Running
	I0814 01:11:01.203688   61689 system_pods.go:89] "coredns-6f6b679f8f-jmqk7" [397fb54b-40cd-4c4e-9503-c077f814c6e5] Running
	I0814 01:11:01.203695   61689 system_pods.go:89] "etcd-default-k8s-diff-port-585256" [2fa04b3c-b311-4f0f-82e5-e512db3dd11b] Running
	I0814 01:11:01.203702   61689 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-585256" [ef1c1aeb-9cee-47d6-8cf5-14535208af62] Running
	I0814 01:11:01.203708   61689 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-585256" [ff5c5123-b01f-4023-b8ec-169065ddb88a] Running
	I0814 01:11:01.203713   61689 system_pods.go:89] "kube-proxy-rg8h9" [b2601104-a6f5-4065-87d5-c027d583f647] Running
	I0814 01:11:01.203719   61689 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-585256" [31e655e4-00c7-443a-9ee8-058a4020852d] Running
	I0814 01:11:01.203727   61689 system_pods.go:89] "metrics-server-6867b74b74-lzfpz" [2dd31ad2-c384-4edd-8d5c-561bc2fa72e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:01.203733   61689 system_pods.go:89] "storage-provisioner" [1636777b-2347-4c48-b72a-3b5445c4862a] Running
	I0814 01:11:01.203744   61689 system_pods.go:126] duration metric: took 201.72785ms to wait for k8s-apps to be running ...
	I0814 01:11:01.203752   61689 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:01.203810   61689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:01.218903   61689 system_svc.go:56] duration metric: took 15.144054ms WaitForService to wait for kubelet
	I0814 01:11:01.218925   61689 kubeadm.go:582] duration metric: took 9.666741267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:01.218950   61689 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:01.403320   61689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:01.403350   61689 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:01.403363   61689 node_conditions.go:105] duration metric: took 184.40754ms to run NodePressure ...
	I0814 01:11:01.403377   61689 start.go:241] waiting for startup goroutines ...
	I0814 01:11:01.403385   61689 start.go:246] waiting for cluster config update ...
	I0814 01:11:01.403398   61689 start.go:255] writing updated cluster config ...
	I0814 01:11:01.403690   61689 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:01.451211   61689 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:01.453288   61689 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-585256" cluster and "default" namespace by default
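
	The interleaved pod_ready waits in this log poll each system pod until its Ready condition turns True or a per-pod timeout expires. A stripped-down version of that loop using client-go, assuming access through the host kubeconfig at /home/jenkins/minikube-integration/19429-9425/kubeconfig rather than minikube's internal client, could be:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the
	// timeout expires, mirroring the pod_ready.go waits in the log above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19429-9425/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-hngz9", 6*time.Minute))
	}
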
	I0814 01:11:09.693028   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:11:09.693700   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:09.693975   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:18.892614   61115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.307924274s)
	I0814 01:11:18.892692   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:18.907571   61115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 01:11:18.917775   61115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:11:18.927492   61115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:11:18.927521   61115 kubeadm.go:157] found existing configuration files:
	
	I0814 01:11:18.927588   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:11:18.936787   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:11:18.936840   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:11:18.946163   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:11:18.954567   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:11:18.954613   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:11:18.963437   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.971647   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:11:18.971691   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:11:18.980676   61115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:11:18.989638   61115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:11:18.989681   61115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:11:18.998834   61115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:11:19.044209   61115 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 01:11:19.044286   61115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:11:19.152983   61115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:11:19.153147   61115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:11:19.153253   61115 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 01:11:19.160933   61115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:11:14.694223   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:14.694446   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:19.162856   61115 out.go:204]   - Generating certificates and keys ...
	I0814 01:11:19.162972   61115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:11:19.163044   61115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:11:19.163121   61115 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:11:19.163213   61115 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:11:19.163322   61115 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:11:19.163396   61115 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:11:19.163467   61115 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:11:19.163527   61115 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:11:19.163755   61115 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:11:19.163860   61115 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:11:19.163917   61115 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:11:19.163987   61115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:11:19.615014   61115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:11:19.777877   61115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 01:11:19.917278   61115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:11:20.190113   61115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:11:20.351945   61115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:11:20.352522   61115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:11:20.355239   61115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:11:20.356550   61115 out.go:204]   - Booting up control plane ...
	I0814 01:11:20.356683   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:11:20.356784   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:11:20.356993   61115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:11:20.376382   61115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:11:20.381926   61115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:11:20.382001   61115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:11:20.510283   61115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 01:11:20.510394   61115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 01:11:21.016575   61115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.997518ms
	I0814 01:11:21.016716   61115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 01:11:26.018203   61115 kubeadm.go:310] [api-check] The API server is healthy after 5.00166081s
	I0814 01:11:26.035867   61115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 01:11:26.053660   61115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 01:11:26.084727   61115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 01:11:26.084987   61115 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-901410 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 01:11:26.100115   61115 kubeadm.go:310] [bootstrap-token] Using token: t7ews1.hirn7pq8otu9l2lh
	I0814 01:11:26.101532   61115 out.go:204]   - Configuring RBAC rules ...
	I0814 01:11:26.101691   61115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 01:11:26.107165   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 01:11:26.117715   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 01:11:26.121222   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 01:11:26.124371   61115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 01:11:26.128216   61115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 01:11:26.426496   61115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 01:11:26.868163   61115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 01:11:27.426401   61115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 01:11:27.427484   61115 kubeadm.go:310] 
	I0814 01:11:27.427587   61115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 01:11:27.427604   61115 kubeadm.go:310] 
	I0814 01:11:27.427727   61115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 01:11:27.427743   61115 kubeadm.go:310] 
	I0814 01:11:27.427770   61115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 01:11:27.427846   61115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 01:11:27.427928   61115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 01:11:27.427939   61115 kubeadm.go:310] 
	I0814 01:11:27.428020   61115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 01:11:27.428027   61115 kubeadm.go:310] 
	I0814 01:11:27.428109   61115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 01:11:27.428116   61115 kubeadm.go:310] 
	I0814 01:11:27.428192   61115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 01:11:27.428289   61115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 01:11:27.428389   61115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 01:11:27.428397   61115 kubeadm.go:310] 
	I0814 01:11:27.428511   61115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 01:11:27.428625   61115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 01:11:27.428640   61115 kubeadm.go:310] 
	I0814 01:11:27.428778   61115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.428920   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 \
	I0814 01:11:27.428964   61115 kubeadm.go:310] 	--control-plane 
	I0814 01:11:27.428971   61115 kubeadm.go:310] 
	I0814 01:11:27.429085   61115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 01:11:27.429097   61115 kubeadm.go:310] 
	I0814 01:11:27.429229   61115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t7ews1.hirn7pq8otu9l2lh \
	I0814 01:11:27.429381   61115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f608549e9c065598e2d494ba753eb8a7bbe7260a53a76decd30e6e17b5fe24c3 
	I0814 01:11:27.430485   61115 kubeadm.go:310] W0814 01:11:19.012996    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.430895   61115 kubeadm.go:310] W0814 01:11:19.013634    2597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 01:11:27.431062   61115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:11:27.431092   61115 cni.go:84] Creating CNI manager for ""
	I0814 01:11:27.431102   61115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 01:11:27.432987   61115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 01:11:24.694861   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:24.695123   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:11:27.434183   61115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 01:11:27.446168   61115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 01:11:27.466651   61115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-901410 minikube.k8s.io/updated_at=2024_08_14T01_11_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a5ac17629aabe8562ef9220195ba6559cf416caf minikube.k8s.io/name=embed-certs-901410 minikube.k8s.io/primary=true
	I0814 01:11:27.466760   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:27.495784   61115 ops.go:34] apiserver oom_adj: -16
	I0814 01:11:27.670097   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.170891   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:28.670320   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.170197   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:29.670157   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.170664   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:30.670254   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.170767   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.671004   61115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 01:11:31.762872   61115 kubeadm.go:1113] duration metric: took 4.296174293s to wait for elevateKubeSystemPrivileges
	I0814 01:11:31.762902   61115 kubeadm.go:394] duration metric: took 4m55.664668706s to StartCluster
	I0814 01:11:31.762924   61115 settings.go:142] acquiring lock: {Name:mkb0f793aa2a6618ff3457f9cd2d34beec5f1b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.763010   61115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 01:11:31.764625   61115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/kubeconfig: {Name:mkd99f966962f24d3ec49056c344f5320df43dba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 01:11:31.764876   61115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.210 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 01:11:31.764951   61115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 01:11:31.765038   61115 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-901410"
	I0814 01:11:31.765052   61115 addons.go:69] Setting default-storageclass=true in profile "embed-certs-901410"
	I0814 01:11:31.765070   61115 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-901410"
	I0814 01:11:31.765068   61115 addons.go:69] Setting metrics-server=true in profile "embed-certs-901410"
	I0814 01:11:31.765086   61115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-901410"
	I0814 01:11:31.765092   61115 config.go:182] Loaded profile config "embed-certs-901410": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 01:11:31.765111   61115 addons.go:234] Setting addon metrics-server=true in "embed-certs-901410"
	W0814 01:11:31.765126   61115 addons.go:243] addon metrics-server should already be in state true
	I0814 01:11:31.765163   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	W0814 01:11:31.765083   61115 addons.go:243] addon storage-provisioner should already be in state true
	I0814 01:11:31.765199   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.765481   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765516   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765554   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765570   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.765588   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.765614   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.766459   61115 out.go:177] * Verifying Kubernetes components...
	I0814 01:11:31.767835   61115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 01:11:31.781637   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0814 01:11:31.782146   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.782517   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0814 01:11:31.782700   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.782732   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783038   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.783052   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.783213   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.783540   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.783569   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.783897   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.784326   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39503
	I0814 01:11:31.784458   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.784487   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.784791   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.785281   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.785306   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.785665   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.786175   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786218   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.786466   61115 addons.go:234] Setting addon default-storageclass=true in "embed-certs-901410"
	W0814 01:11:31.786484   61115 addons.go:243] addon default-storageclass should already be in state true
	I0814 01:11:31.786513   61115 host.go:66] Checking if "embed-certs-901410" exists ...
	I0814 01:11:31.786853   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.786881   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.801208   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0814 01:11:31.801592   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.802016   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.802032   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.802382   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.802555   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.803106   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0814 01:11:31.803589   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.804133   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.804159   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.804462   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.804532   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.804716   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.805759   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0814 01:11:31.806197   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.806546   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.806590   61115 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 01:11:31.806667   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.806692   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.806982   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.807572   61115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 01:11:31.807609   61115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 01:11:31.808223   61115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 01:11:31.808225   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 01:11:31.808301   61115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 01:11:31.808335   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.810018   61115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:31.810057   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 01:11:31.810125   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.812029   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.812728   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.812862   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813062   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.813261   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.813284   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.813420   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.813562   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.813864   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.813880   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.814032   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.814236   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.814398   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.814542   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.825081   61115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0814 01:11:31.825523   61115 main.go:141] libmachine: () Calling .GetVersion
	I0814 01:11:31.825944   61115 main.go:141] libmachine: Using API Version  1
	I0814 01:11:31.825967   61115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 01:11:31.826327   61115 main.go:141] libmachine: () Calling .GetMachineName
	I0814 01:11:31.826537   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetState
	I0814 01:11:31.831060   61115 main.go:141] libmachine: (embed-certs-901410) Calling .DriverName
	I0814 01:11:31.831292   61115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:31.831315   61115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 01:11:31.831334   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHHostname
	I0814 01:11:31.834552   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.834934   61115 main.go:141] libmachine: (embed-certs-901410) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:4e:56", ip: ""} in network mk-embed-certs-901410: {Iface:virbr2 ExpiryTime:2024-08-14 02:06:23 +0000 UTC Type:0 Mac:52:54:00:fa:4e:56 Iaid: IPaddr:192.168.50.210 Prefix:24 Hostname:embed-certs-901410 Clientid:01:52:54:00:fa:4e:56}
	I0814 01:11:31.834962   61115 main.go:141] libmachine: (embed-certs-901410) DBG | domain embed-certs-901410 has defined IP address 192.168.50.210 and MAC address 52:54:00:fa:4e:56 in network mk-embed-certs-901410
	I0814 01:11:31.835102   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHPort
	I0814 01:11:31.835304   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHKeyPath
	I0814 01:11:31.835476   61115 main.go:141] libmachine: (embed-certs-901410) Calling .GetSSHUsername
	I0814 01:11:31.835610   61115 sshutil.go:53] new ssh client: &{IP:192.168.50.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/embed-certs-901410/id_rsa Username:docker}
	I0814 01:11:31.960224   61115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 01:11:31.980097   61115 node_ready.go:35] waiting up to 6m0s for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993130   61115 node_ready.go:49] node "embed-certs-901410" has status "Ready":"True"
	I0814 01:11:31.993152   61115 node_ready.go:38] duration metric: took 13.020022ms for node "embed-certs-901410" to be "Ready" ...
	I0814 01:11:31.993164   61115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:31.998448   61115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:32.075908   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 01:11:32.075933   61115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 01:11:32.114559   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 01:11:32.137251   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 01:11:32.144383   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 01:11:32.144404   61115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 01:11:32.207930   61115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.207957   61115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 01:11:32.235306   61115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 01:11:32.769968   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.769994   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770140   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770164   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770300   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770337   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770348   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770351   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770360   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770412   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770434   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770447   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770461   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.770472   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.770656   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770696   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770706   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.770767   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:32.770945   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.770960   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779423   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:32.779437   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:32.779661   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:32.779675   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:32.779702   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.063157   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.063187   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064055   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.064101   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064110   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064120   61115 main.go:141] libmachine: Making call to close driver server
	I0814 01:11:33.064127   61115 main.go:141] libmachine: (embed-certs-901410) Calling .Close
	I0814 01:11:33.064378   61115 main.go:141] libmachine: Successfully made call to close driver server
	I0814 01:11:33.064397   61115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 01:11:33.064409   61115 addons.go:475] Verifying addon metrics-server=true in "embed-certs-901410"
	I0814 01:11:33.064458   61115 main.go:141] libmachine: (embed-certs-901410) DBG | Closing plugin on server side
	I0814 01:11:33.066122   61115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 01:11:33.067534   61115 addons.go:510] duration metric: took 1.302585898s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 01:11:34.004078   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:36.005391   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:38.505031   61115 pod_ready.go:102] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"False"
	I0814 01:11:39.507006   61115 pod_ready.go:92] pod "etcd-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.507026   61115 pod_ready.go:81] duration metric: took 7.508554233s for pod "etcd-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.507035   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517719   61115 pod_ready.go:92] pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.517739   61115 pod_ready.go:81] duration metric: took 10.698211ms for pod "kube-apiserver-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.517751   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522245   61115 pod_ready.go:92] pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.522267   61115 pod_ready.go:81] duration metric: took 4.507786ms for pod "kube-controller-manager-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.522280   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527880   61115 pod_ready.go:92] pod "kube-proxy-fqmzw" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.527897   61115 pod_ready.go:81] duration metric: took 5.609617ms for pod "kube-proxy-fqmzw" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.527904   61115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532430   61115 pod_ready.go:92] pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace has status "Ready":"True"
	I0814 01:11:39.532448   61115 pod_ready.go:81] duration metric: took 4.536902ms for pod "kube-scheduler-embed-certs-901410" in "kube-system" namespace to be "Ready" ...
	I0814 01:11:39.532456   61115 pod_ready.go:38] duration metric: took 7.539280742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 01:11:39.532471   61115 api_server.go:52] waiting for apiserver process to appear ...
	I0814 01:11:39.532537   61115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 01:11:39.547608   61115 api_server.go:72] duration metric: took 7.782698582s to wait for apiserver process to appear ...
	I0814 01:11:39.547635   61115 api_server.go:88] waiting for apiserver healthz status ...
	I0814 01:11:39.547652   61115 api_server.go:253] Checking apiserver healthz at https://192.168.50.210:8443/healthz ...
	I0814 01:11:39.552021   61115 api_server.go:279] https://192.168.50.210:8443/healthz returned 200:
	ok
	I0814 01:11:39.552955   61115 api_server.go:141] control plane version: v1.31.0
	I0814 01:11:39.552972   61115 api_server.go:131] duration metric: took 5.330974ms to wait for apiserver health ...
	I0814 01:11:39.552979   61115 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 01:11:39.704928   61115 system_pods.go:59] 9 kube-system pods found
	I0814 01:11:39.704952   61115 system_pods.go:61] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:39.704959   61115 system_pods.go:61] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:39.704964   61115 system_pods.go:61] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:39.704970   61115 system_pods.go:61] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:39.704974   61115 system_pods.go:61] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:39.704977   61115 system_pods.go:61] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:39.704980   61115 system_pods.go:61] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:39.704985   61115 system_pods.go:61] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:39.704989   61115 system_pods.go:61] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:39.704995   61115 system_pods.go:74] duration metric: took 152.010903ms to wait for pod list to return data ...
	I0814 01:11:39.705004   61115 default_sa.go:34] waiting for default service account to be created ...
	I0814 01:11:39.902622   61115 default_sa.go:45] found service account: "default"
	I0814 01:11:39.902662   61115 default_sa.go:55] duration metric: took 197.651811ms for default service account to be created ...
	I0814 01:11:39.902674   61115 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 01:11:40.105740   61115 system_pods.go:86] 9 kube-system pods found
	I0814 01:11:40.105767   61115 system_pods.go:89] "coredns-6f6b679f8f-bq2xk" [6593bc2b-ef8f-4738-8674-dcaea675b88b] Running
	I0814 01:11:40.105775   61115 system_pods.go:89] "coredns-6f6b679f8f-lwd2j" [75f6e3fe-c5ac-4dbc-bbbb-bfb91796aaff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 01:11:40.105781   61115 system_pods.go:89] "etcd-embed-certs-901410" [60eb6469-1be4-401b-9382-977428a0ead5] Running
	I0814 01:11:40.105787   61115 system_pods.go:89] "kube-apiserver-embed-certs-901410" [802d6cc2-d1d4-485c-98d8-e5b4afa9e632] Running
	I0814 01:11:40.105791   61115 system_pods.go:89] "kube-controller-manager-embed-certs-901410" [12e308db-7ca5-4d33-b62a-e144e7dd06c5] Running
	I0814 01:11:40.105794   61115 system_pods.go:89] "kube-proxy-fqmzw" [f9d63b14-ce56-4d0b-8511-1198b306b70e] Running
	I0814 01:11:40.105798   61115 system_pods.go:89] "kube-scheduler-embed-certs-901410" [668258a9-02d2-416d-ac07-b2b87deea00d] Running
	I0814 01:11:40.105804   61115 system_pods.go:89] "metrics-server-6867b74b74-mwl74" [065b6973-cd9d-4091-96b9-8dff2c5f85eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 01:11:40.105809   61115 system_pods.go:89] "storage-provisioner" [e0f82856-b50c-4a5f-b0c7-4cd81e4b896e] Running
	I0814 01:11:40.105815   61115 system_pods.go:126] duration metric: took 203.134555ms to wait for k8s-apps to be running ...
	I0814 01:11:40.105824   61115 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 01:11:40.105866   61115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:11:40.121399   61115 system_svc.go:56] duration metric: took 15.565745ms WaitForService to wait for kubelet
	I0814 01:11:40.121427   61115 kubeadm.go:582] duration metric: took 8.356517219s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 01:11:40.121445   61115 node_conditions.go:102] verifying NodePressure condition ...
	I0814 01:11:40.303687   61115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 01:11:40.303720   61115 node_conditions.go:123] node cpu capacity is 2
	I0814 01:11:40.303732   61115 node_conditions.go:105] duration metric: took 182.281943ms to run NodePressure ...
	I0814 01:11:40.303745   61115 start.go:241] waiting for startup goroutines ...
	I0814 01:11:40.303754   61115 start.go:246] waiting for cluster config update ...
	I0814 01:11:40.303768   61115 start.go:255] writing updated cluster config ...
	I0814 01:11:40.304122   61115 ssh_runner.go:195] Run: rm -f paused
	I0814 01:11:40.350855   61115 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 01:11:40.352610   61115 out.go:177] * Done! kubectl is now configured to use "embed-certs-901410" cluster and "default" namespace by default
	I0814 01:11:44.695887   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:11:44.696122   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.697922   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:12:24.698217   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:12:24.698256   61804 kubeadm.go:310] 
	I0814 01:12:24.698318   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:12:24.698406   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:12:24.698434   61804 kubeadm.go:310] 
	I0814 01:12:24.698484   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:12:24.698530   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:12:24.698640   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:12:24.698651   61804 kubeadm.go:310] 
	I0814 01:12:24.698784   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:12:24.698841   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:12:24.698874   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:12:24.698878   61804 kubeadm.go:310] 
	I0814 01:12:24.699009   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:12:24.699119   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:12:24.699128   61804 kubeadm.go:310] 
	I0814 01:12:24.699294   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:12:24.699431   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:12:24.699536   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:12:24.699635   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:12:24.699647   61804 kubeadm.go:310] 
	I0814 01:12:24.700201   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:12:24.700300   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:12:24.700391   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 01:12:24.700527   61804 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 01:12:24.700577   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 01:12:30.038180   61804 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.337582505s)
	I0814 01:12:30.038256   61804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 01:12:30.052476   61804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 01:12:30.062330   61804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 01:12:30.062357   61804 kubeadm.go:157] found existing configuration files:
	
	I0814 01:12:30.062409   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 01:12:30.072303   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 01:12:30.072355   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 01:12:30.081331   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 01:12:30.090105   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 01:12:30.090163   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 01:12:30.099446   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.108290   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 01:12:30.108346   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 01:12:30.117872   61804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 01:12:30.126357   61804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 01:12:30.126424   61804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 01:12:30.136277   61804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 01:12:30.342736   61804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 01:14:26.274820   61804 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 01:14:26.274958   61804 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 01:14:26.276512   61804 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 01:14:26.276601   61804 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 01:14:26.276743   61804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 01:14:26.276887   61804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 01:14:26.277017   61804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 01:14:26.277097   61804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 01:14:26.278845   61804 out.go:204]   - Generating certificates and keys ...
	I0814 01:14:26.278935   61804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 01:14:26.279005   61804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 01:14:26.279103   61804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 01:14:26.279187   61804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 01:14:26.279278   61804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 01:14:26.279351   61804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 01:14:26.279433   61804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 01:14:26.279515   61804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 01:14:26.279623   61804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 01:14:26.279725   61804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 01:14:26.279776   61804 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 01:14:26.279858   61804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 01:14:26.279933   61804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 01:14:26.280086   61804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 01:14:26.280188   61804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 01:14:26.280289   61804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 01:14:26.280424   61804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 01:14:26.280517   61804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 01:14:26.280573   61804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 01:14:26.280648   61804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 01:14:26.281982   61804 out.go:204]   - Booting up control plane ...
	I0814 01:14:26.282070   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 01:14:26.282159   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 01:14:26.282249   61804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 01:14:26.282389   61804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 01:14:26.282564   61804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 01:14:26.282624   61804 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 01:14:26.282685   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.282866   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.282971   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283161   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283235   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283494   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283611   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.283768   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.283830   61804 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 01:14:26.284021   61804 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 01:14:26.284032   61804 kubeadm.go:310] 
	I0814 01:14:26.284069   61804 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 01:14:26.284126   61804 kubeadm.go:310] 		timed out waiting for the condition
	I0814 01:14:26.284135   61804 kubeadm.go:310] 
	I0814 01:14:26.284188   61804 kubeadm.go:310] 	This error is likely caused by:
	I0814 01:14:26.284234   61804 kubeadm.go:310] 		- The kubelet is not running
	I0814 01:14:26.284336   61804 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 01:14:26.284344   61804 kubeadm.go:310] 
	I0814 01:14:26.284429   61804 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 01:14:26.284463   61804 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 01:14:26.284490   61804 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 01:14:26.284499   61804 kubeadm.go:310] 
	I0814 01:14:26.284587   61804 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 01:14:26.284726   61804 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 01:14:26.284747   61804 kubeadm.go:310] 
	I0814 01:14:26.284889   61804 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 01:14:26.285007   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 01:14:26.285083   61804 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 01:14:26.285158   61804 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 01:14:26.285174   61804 kubeadm.go:310] 
	I0814 01:14:26.285220   61804 kubeadm.go:394] duration metric: took 8m6.417053649s to StartCluster
	I0814 01:14:26.285266   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 01:14:26.285318   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 01:14:26.327320   61804 cri.go:89] found id: ""
	I0814 01:14:26.327351   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.327359   61804 logs.go:278] No container was found matching "kube-apiserver"
	I0814 01:14:26.327366   61804 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 01:14:26.327435   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 01:14:26.362074   61804 cri.go:89] found id: ""
	I0814 01:14:26.362101   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.362109   61804 logs.go:278] No container was found matching "etcd"
	I0814 01:14:26.362115   61804 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 01:14:26.362192   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 01:14:26.395777   61804 cri.go:89] found id: ""
	I0814 01:14:26.395802   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.395814   61804 logs.go:278] No container was found matching "coredns"
	I0814 01:14:26.395821   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 01:14:26.395884   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 01:14:26.429263   61804 cri.go:89] found id: ""
	I0814 01:14:26.429290   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.429299   61804 logs.go:278] No container was found matching "kube-scheduler"
	I0814 01:14:26.429307   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 01:14:26.429370   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 01:14:26.463278   61804 cri.go:89] found id: ""
	I0814 01:14:26.463307   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.463314   61804 logs.go:278] No container was found matching "kube-proxy"
	I0814 01:14:26.463321   61804 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 01:14:26.463381   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 01:14:26.496454   61804 cri.go:89] found id: ""
	I0814 01:14:26.496493   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.496513   61804 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 01:14:26.496521   61804 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 01:14:26.496591   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 01:14:26.530536   61804 cri.go:89] found id: ""
	I0814 01:14:26.530567   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.530579   61804 logs.go:278] No container was found matching "kindnet"
	I0814 01:14:26.530587   61804 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 01:14:26.530659   61804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 01:14:26.564201   61804 cri.go:89] found id: ""
	I0814 01:14:26.564232   61804 logs.go:276] 0 containers: []
	W0814 01:14:26.564245   61804 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 01:14:26.564258   61804 logs.go:123] Gathering logs for kubelet ...
	I0814 01:14:26.564274   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 01:14:26.614225   61804 logs.go:123] Gathering logs for dmesg ...
	I0814 01:14:26.614263   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 01:14:26.632126   61804 logs.go:123] Gathering logs for describe nodes ...
	I0814 01:14:26.632162   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 01:14:26.733732   61804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 01:14:26.733757   61804 logs.go:123] Gathering logs for CRI-O ...
	I0814 01:14:26.733773   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 01:14:26.849177   61804 logs.go:123] Gathering logs for container status ...
	I0814 01:14:26.849218   61804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0814 01:14:26.885741   61804 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 01:14:26.885794   61804 out.go:239] * 
	W0814 01:14:26.885846   61804 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.885871   61804 out.go:239] * 
	W0814 01:14:26.886747   61804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 01:14:26.889874   61804 out.go:177] 
	W0814 01:14:26.891040   61804 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 01:14:26.891083   61804 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 01:14:26.891101   61804 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 01:14:26.892501   61804 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.305370129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598705305337846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c962e9c0-a580-4481-b885-ed515df315fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.305924656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e8243f0-1e7d-403e-97ad-c0d2e5687bbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.305975209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e8243f0-1e7d-403e-97ad-c0d2e5687bbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.306013966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e8243f0-1e7d-403e-97ad-c0d2e5687bbd name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.335819858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=407eaa1b-1d03-473b-a1e2-18efee7eda11 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.335911448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=407eaa1b-1d03-473b-a1e2-18efee7eda11 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.336734179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0d69c58-ac1f-4271-b0ed-76c60d8f9bdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.337118237Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598705337093546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0d69c58-ac1f-4271-b0ed-76c60d8f9bdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.337560151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8e6dd22-ba06-44ef-bba5-addef85b9513 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.337607875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8e6dd22-ba06-44ef-bba5-addef85b9513 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.337638358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d8e6dd22-ba06-44ef-bba5-addef85b9513 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.367106582Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c3bf7e8-03d8-4da0-93c6-9af5f09eafdc name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.367177481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c3bf7e8-03d8-4da0-93c6-9af5f09eafdc name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.369524408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95c730a5-4778-47d5-a0e1-eb3bcdaf84fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.369902501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598705369880063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95c730a5-4778-47d5-a0e1-eb3bcdaf84fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.370706287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e0b2b77-59cf-4224-a658-38fcb20015e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.370761937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e0b2b77-59cf-4224-a658-38fcb20015e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.370796162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e0b2b77-59cf-4224-a658-38fcb20015e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.399639144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d87de61-b6c1-493d-9887-730464181d04 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.399720349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d87de61-b6c1-493d-9887-730464181d04 name=/runtime.v1.RuntimeService/Version
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.400537089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2455d966-18fa-4158-8cd5-0c7e6d257aee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.400917229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723598705400898272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2455d966-18fa-4158-8cd5-0c7e6d257aee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.401337125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc5ad55e-751a-4054-a079-6648bca28d18 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.401384793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc5ad55e-751a-4054-a079-6648bca28d18 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 01:25:05 old-k8s-version-179312 crio[648]: time="2024-08-14 01:25:05.401420445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bc5ad55e-751a-4054-a079-6648bca28d18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug14 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051654] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037900] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Aug14 01:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.069039] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.556159] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.745693] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.067571] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073344] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.191121] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.114642] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.237276] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +6.127376] systemd-fstab-generator[900]: Ignoring "noauto" option for root device
	[  +0.063905] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.036138] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[ +12.708573] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 01:10] systemd-fstab-generator[5126]: Ignoring "noauto" option for root device
	[Aug14 01:12] systemd-fstab-generator[5405]: Ignoring "noauto" option for root device
	[  +0.068703] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:25:05 up 19 min,  0 users,  load average: 0.04, 0.05, 0.00
	Linux old-k8s-version-179312 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000aef5f0)
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: goroutine 153 [select]:
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a2def0, 0x4f0ac20, 0xc000b0f810, 0x1, 0xc0001000c0)
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e0700, 0xc0001000c0)
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00078a6f0, 0xc00075d100)
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 14 01:25:04 old-k8s-version-179312 kubelet[6831]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 14 01:25:04 old-k8s-version-179312 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 14 01:25:04 old-k8s-version-179312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 14 01:25:05 old-k8s-version-179312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 131.
	Aug 14 01:25:05 old-k8s-version-179312 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 14 01:25:05 old-k8s-version-179312 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 14 01:25:05 old-k8s-version-179312 kubelet[6863]: I0814 01:25:05.266168    6863 server.go:416] Version: v1.20.0
	Aug 14 01:25:05 old-k8s-version-179312 kubelet[6863]: I0814 01:25:05.266497    6863 server.go:837] Client rotation is on, will bootstrap in background
	Aug 14 01:25:05 old-k8s-version-179312 kubelet[6863]: I0814 01:25:05.268236    6863 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 14 01:25:05 old-k8s-version-179312 kubelet[6863]: W0814 01:25:05.269524    6863 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 14 01:25:05 old-k8s-version-179312 kubelet[6863]: I0814 01:25:05.269766    6863 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 2 (218.554427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-179312" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.28s)
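Note: the wait-control-plane failure above indicates the kubelet never became healthy on this node. A minimal triage sketch, run against the same profile, is shown below; the profile name (old-k8s-version-179312) and the cgroup-driver flag are taken from the logs above, while the exact invocation wrapping them in `minikube ssh` is illustrative and not part of the recorded test run:

	# inspect the kubelet unit and its recent journal on the node
	minikube ssh -p old-k8s-version-179312 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-179312 -- sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start (command as suggested in the kubeadm output)
	minikube ssh -p old-k8s-version-179312 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the start with the cgroup-driver hint printed by minikube above
	minikube start -p old-k8s-version-179312 --extra-config=kubelet.cgroup-driver=systemd
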

                                                
                                    

Test pass (254/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 49.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 14.76
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.12
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 52.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 129.18
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 15.68
35 TestAddons/parallel/InspektorGadget 10.77
37 TestAddons/parallel/HelmTiller 12.14
39 TestAddons/parallel/CSI 82.35
40 TestAddons/parallel/Headlamp 18.71
41 TestAddons/parallel/CloudSpanner 5.58
42 TestAddons/parallel/LocalPath 12.08
43 TestAddons/parallel/NvidiaDevicePlugin 6.67
44 TestAddons/parallel/Yakd 10.83
46 TestCertOptions 70.08
47 TestCertExpiration 275.69
49 TestForceSystemdFlag 67.19
50 TestForceSystemdEnv 46.13
52 TestKVMDriverInstallOrUpdate 4.87
56 TestErrorSpam/setup 38.64
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.5
60 TestErrorSpam/unpause 1.65
61 TestErrorSpam/stop 4.57
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.19
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
73 TestFunctional/serial/CacheCmd/cache/add_local 2.08
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 41.52
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.37
85 TestFunctional/serial/InvalidService 3.94
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 10.27
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.74
95 TestFunctional/parallel/ServiceCmdConnect 51.44
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 45.14
99 TestFunctional/parallel/SSHCmd 0.38
100 TestFunctional/parallel/CpCmd 1.24
101 TestFunctional/parallel/MySQL 21.96
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.23
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
111 TestFunctional/parallel/License 0.57
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.59
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 5.41
121 TestFunctional/parallel/ImageCommands/Setup 1.79
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
134 TestFunctional/parallel/ProfileCmd/profile_list 0.28
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.52
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.59
139 TestFunctional/parallel/ImageCommands/ImageRemove 1.13
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.54
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
142 TestFunctional/parallel/ServiceCmd/DeployApp 8.15
143 TestFunctional/parallel/MountCmd/any-port 8.4
144 TestFunctional/parallel/ServiceCmd/List 0.42
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
147 TestFunctional/parallel/ServiceCmd/Format 0.28
148 TestFunctional/parallel/ServiceCmd/URL 0.27
149 TestFunctional/parallel/MountCmd/specific-port 1.63
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.15
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 224.4
158 TestMultiControlPlane/serial/DeployApp 6.6
159 TestMultiControlPlane/serial/PingHostFromPods 1.16
160 TestMultiControlPlane/serial/AddWorkerNode 56.31
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
163 TestMultiControlPlane/serial/CopyFile 12.4
164 TestMultiControlPlane/serial/StopSecondaryNode 2.89
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
166 TestMultiControlPlane/serial/RestartSecondaryNode 45.54
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.51
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.72
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 348.81
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 73.07
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
179 TestJSONOutput/start/Command 47.89
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.66
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.57
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.58
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 84.78
211 TestMountStart/serial/StartWithMountFirst 24.54
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 24.45
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.88
216 TestMountStart/serial/VerifyMountPostDelete 0.35
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 23.02
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 110.98
223 TestMultiNode/serial/DeployApp2Nodes 4.71
224 TestMultiNode/serial/PingHostFrom2Pods 0.75
225 TestMultiNode/serial/AddNode 47.5
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.79
229 TestMultiNode/serial/StopNode 2.19
230 TestMultiNode/serial/StartAfterStop 38.95
232 TestMultiNode/serial/DeleteNode 2.16
234 TestMultiNode/serial/RestartMultiNode 179.11
235 TestMultiNode/serial/ValidateNameConflict 41.11
242 TestScheduledStopUnix 110.23
246 TestRunningBinaryUpgrade 216.01
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 85.19
260 TestStoppedBinaryUpgrade/Setup 2.27
261 TestStoppedBinaryUpgrade/Upgrade 137.83
262 TestNoKubernetes/serial/StartWithStopK8s 36.64
263 TestNoKubernetes/serial/Start 27.99
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
265 TestNoKubernetes/serial/ProfileList 4.37
266 TestNoKubernetes/serial/Stop 1.27
267 TestNoKubernetes/serial/StartNoArgs 33.94
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
277 TestNetworkPlugins/group/false 3.03
282 TestPause/serial/Start 89.96
285 TestPause/serial/SecondStartNoReconfiguration 40.44
287 TestStartStop/group/embed-certs/serial/FirstStart 86.28
288 TestPause/serial/Pause 0.8
289 TestPause/serial/VerifyStatus 0.26
290 TestPause/serial/Unpause 0.69
291 TestPause/serial/PauseAgain 0.82
292 TestPause/serial/DeletePaused 1.64
293 TestPause/serial/VerifyDeletedResources 0.59
295 TestStartStop/group/no-preload/serial/FirstStart 97.69
296 TestStartStop/group/embed-certs/serial/DeployApp 10.31
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.19
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
301 TestStartStop/group/no-preload/serial/DeployApp 10.51
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.24
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
310 TestStartStop/group/embed-certs/serial/SecondStart 665.61
312 TestStartStop/group/no-preload/serial/SecondStart 543.31
314 TestStartStop/group/old-k8s-version/serial/Stop 6.29
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 563.73
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
327 TestStartStop/group/newest-cni/serial/FirstStart 45.44
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
330 TestStartStop/group/newest-cni/serial/Stop 10.49
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
332 TestStartStop/group/newest-cni/serial/SecondStart 36.79
333 TestNetworkPlugins/group/auto/Start 49.12
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
337 TestStartStop/group/newest-cni/serial/Pause 2.32
338 TestNetworkPlugins/group/kindnet/Start 67.88
339 TestNetworkPlugins/group/calico/Start 97.98
340 TestNetworkPlugins/group/auto/KubeletFlags 0.26
341 TestNetworkPlugins/group/auto/NetCatPod 14.3
342 TestNetworkPlugins/group/auto/DNS 22.07
343 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestNetworkPlugins/group/auto/HairPin 0.13
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
348 TestNetworkPlugins/group/custom-flannel/Start 70.09
349 TestNetworkPlugins/group/kindnet/DNS 0.19
350 TestNetworkPlugins/group/kindnet/Localhost 0.15
351 TestNetworkPlugins/group/kindnet/HairPin 0.17
352 TestNetworkPlugins/group/enable-default-cni/Start 73.78
353 TestNetworkPlugins/group/flannel/Start 102.02
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.21
356 TestNetworkPlugins/group/calico/NetCatPod 11.22
357 TestNetworkPlugins/group/calico/DNS 0.15
358 TestNetworkPlugins/group/calico/Localhost 0.18
359 TestNetworkPlugins/group/calico/HairPin 0.14
360 TestNetworkPlugins/group/bridge/Start 97.63
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
363 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
364 TestNetworkPlugins/group/custom-flannel/DNS 0.18
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.35
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
371 TestNetworkPlugins/group/flannel/ControllerPod 6.01
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
373 TestNetworkPlugins/group/flannel/NetCatPod 10.2
374 TestNetworkPlugins/group/flannel/DNS 0.19
375 TestNetworkPlugins/group/flannel/Localhost 0.12
376 TestNetworkPlugins/group/flannel/HairPin 0.12
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
378 TestNetworkPlugins/group/bridge/NetCatPod 9.2
379 TestNetworkPlugins/group/bridge/DNS 0.15
380 TestNetworkPlugins/group/bridge/Localhost 0.11
381 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (49.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-343093 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-343093 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (49.855749904s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (49.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-343093
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-343093: exit status 85 (54.680322ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-343093 | jenkins | v1.33.1 | 13 Aug 24 23:46 UTC |          |
	|         | -p download-only-343093        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 23:46:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 23:46:49.908135   16601 out.go:291] Setting OutFile to fd 1 ...
	I0813 23:46:49.908380   16601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:46:49.908388   16601 out.go:304] Setting ErrFile to fd 2...
	I0813 23:46:49.908392   16601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:46:49.908550   16601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	W0813 23:46:49.908661   16601 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19429-9425/.minikube/config/config.json: open /home/jenkins/minikube-integration/19429-9425/.minikube/config/config.json: no such file or directory
	I0813 23:46:49.909207   16601 out.go:298] Setting JSON to true
	I0813 23:46:49.910061   16601 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1756,"bootTime":1723591054,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0813 23:46:49.910120   16601 start.go:139] virtualization: kvm guest
	I0813 23:46:49.912627   16601 out.go:97] [download-only-343093] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0813 23:46:49.912761   16601 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball: no such file or directory
	I0813 23:46:49.912784   16601 notify.go:220] Checking for updates...
	I0813 23:46:49.914091   16601 out.go:169] MINIKUBE_LOCATION=19429
	I0813 23:46:49.915670   16601 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 23:46:49.916957   16601 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:46:49.918211   16601 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:46:49.919577   16601 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0813 23:46:49.921836   16601 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0813 23:46:49.922083   16601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 23:46:50.015814   16601 out.go:97] Using the kvm2 driver based on user configuration
	I0813 23:46:50.015848   16601 start.go:297] selected driver: kvm2
	I0813 23:46:50.015855   16601 start.go:901] validating driver "kvm2" against <nil>
	I0813 23:46:50.016208   16601 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:46:50.016339   16601 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 23:46:50.031074   16601 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0813 23:46:50.031133   16601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 23:46:50.031622   16601 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0813 23:46:50.031761   16601 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 23:46:50.031790   16601 cni.go:84] Creating CNI manager for ""
	I0813 23:46:50.031797   16601 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:46:50.031807   16601 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 23:46:50.031856   16601 start.go:340] cluster config:
	{Name:download-only-343093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-343093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:46:50.032021   16601 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:46:50.033903   16601 out.go:97] Downloading VM boot image ...
	I0813 23:46:50.033945   16601 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19429-9425/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0813 23:47:02.719473   16601 out.go:97] Starting "download-only-343093" primary control-plane node in "download-only-343093" cluster
	I0813 23:47:02.719502   16601 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0813 23:47:03.207431   16601 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0813 23:47:03.207460   16601 cache.go:56] Caching tarball of preloaded images
	I0813 23:47:03.207599   16601 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0813 23:47:03.209019   16601 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0813 23:47:03.209039   16601 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 23:47:03.308721   16601 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0813 23:47:14.947955   16601 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 23:47:14.948653   16601 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 23:47:15.847439   16601 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0813 23:47:15.847763   16601 profile.go:143] Saving config to /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/download-only-343093/config.json ...
	I0813 23:47:15.847792   16601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/download-only-343093/config.json: {Name:mkaf22cb3c6f58c2c33655955310d232e883623c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 23:47:15.847944   16601 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0813 23:47:15.848139   16601 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19429-9425/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-343093 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343093"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
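
minikube logs exits with status 85 here because the download-only profile was created without ever starting a host, and the test deliberately records that output instead of failing. A small sketch of that pattern in Go, separating "the command ran but returned non-zero" from "the command could not be run at all", might look like the following (the profile name is reused from the log; the helper itself is illustrative, not the harness's real code):

	// run_tolerant.go: capture output from a command that is expected to exit
	// non-zero (e.g. "minikube logs" against a never-started profile).
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-343093")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("logs succeeded")
		case errors.As(err, &exitErr):
			// The process ran but returned a non-zero status (85 in the run above);
			// keep the output for the report instead of failing outright.
			fmt.Printf("logs exited with status %d\n", exitErr.ExitCode())
		default:
			log.Fatalf("could not run minikube: %v", err) // e.g. binary missing
		}
		fmt.Printf("captured %d bytes of output\n", len(out))
	}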

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-343093
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (14.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-307809 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-307809 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.760642318s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (14.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-307809
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-307809: exit status 85 (56.152603ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-343093 | jenkins | v1.33.1 | 13 Aug 24 23:46 UTC |                     |
	|         | -p download-only-343093        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| delete  | -p download-only-343093        | download-only-343093 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC | 13 Aug 24 23:47 UTC |
	| start   | -o=json --download-only        | download-only-307809 | jenkins | v1.33.1 | 13 Aug 24 23:47 UTC |                     |
	|         | -p download-only-307809        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/13 23:47:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 23:47:40.068576   16939 out.go:291] Setting OutFile to fd 1 ...
	I0813 23:47:40.068809   16939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:40.068817   16939 out.go:304] Setting ErrFile to fd 2...
	I0813 23:47:40.068821   16939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0813 23:47:40.069007   16939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0813 23:47:40.069548   16939 out.go:298] Setting JSON to true
	I0813 23:47:40.070419   16939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1806,"bootTime":1723591054,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0813 23:47:40.070473   16939 start.go:139] virtualization: kvm guest
	I0813 23:47:40.072606   16939 out.go:97] [download-only-307809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0813 23:47:40.072712   16939 notify.go:220] Checking for updates...
	I0813 23:47:40.074099   16939 out.go:169] MINIKUBE_LOCATION=19429
	I0813 23:47:40.075450   16939 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0813 23:47:40.076671   16939 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0813 23:47:40.077802   16939 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0813 23:47:40.079056   16939 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0813 23:47:40.081329   16939 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0813 23:47:40.081563   16939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0813 23:47:40.112323   16939 out.go:97] Using the kvm2 driver based on user configuration
	I0813 23:47:40.112354   16939 start.go:297] selected driver: kvm2
	I0813 23:47:40.112360   16939 start.go:901] validating driver "kvm2" against <nil>
	I0813 23:47:40.112674   16939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:40.112755   16939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19429-9425/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0813 23:47:40.127293   16939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0813 23:47:40.127349   16939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0813 23:47:40.127923   16939 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0813 23:47:40.128058   16939 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 23:47:40.128124   16939 cni.go:84] Creating CNI manager for ""
	I0813 23:47:40.128136   16939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0813 23:47:40.128143   16939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0813 23:47:40.128198   16939 start.go:340] cluster config:
	{Name:download-only-307809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-307809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0813 23:47:40.128319   16939 iso.go:125] acquiring lock: {Name:mk654171f0e78c238a265344dbbd1eacb21d0f1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 23:47:40.129907   16939 out.go:97] Starting "download-only-307809" primary control-plane node in "download-only-307809" cluster
	I0813 23:47:40.129921   16939 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:47:40.637769   16939 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0813 23:47:40.637802   16939 cache.go:56] Caching tarball of preloaded images
	I0813 23:47:40.637970   16939 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0813 23:47:40.639933   16939 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0813 23:47:40.639947   16939 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0813 23:47:40.742617   16939 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19429-9425/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-307809 host does not exist
	  To start a cluster, run: "minikube start -p download-only-307809"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-307809
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-857485 --alsologtostderr --binary-mirror http://127.0.0.1:46401 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-857485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-857485
--- PASS: TestBinaryMirror (0.61s)
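
TestBinaryMirror points --binary-mirror at a plain HTTP endpoint on 127.0.0.1:46401 and then performs a download-only start against it. A throwaway mirror of that shape can be approximated with Go's standard file server, as sketched below; the directory layout minikube expects from the mirror is not visible in this log, so the ./mirror path here is only a placeholder.

	// binary_mirror.go: serve a local directory over HTTP so it can be passed to
	// "minikube start --binary-mirror http://127.0.0.1:46401 ...". The directory
	// layout the mirror must provide is an assumption, not shown in this log.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror (hypothetical path) read-only on the loopback address used above.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Println("binary mirror listening on http://127.0.0.1:46401")
		log.Fatal(http.ListenAndServe("127.0.0.1:46401", nil))
	}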

                                                
                                    
x
+
TestOffline (52.65s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-068840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-068840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (51.018525429s)
helpers_test.go:175: Cleaning up "offline-crio-068840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-068840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-068840: (1.627755239s)
--- PASS: TestOffline (52.65s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-937866
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-937866: exit status 85 (48.94862ms)

                                                
                                                
-- stdout --
	* Profile "addons-937866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-937866
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-937866: exit status 85 (50.156332ms)

                                                
                                                
-- stdout --
	* Profile "addons-937866" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-937866"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (129.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-937866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-937866 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m9.180581753s)
--- PASS: TestAddons/Setup (129.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-937866 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-937866 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.422952ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-d8ptz" [03e452f4-85d3-486e-bf4e-30e1bf8b8929] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004940764s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9lq9k" [1cb9d48b-73e5-4500-bb30-902eac13720e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004050693s
addons_test.go:342: (dbg) Run:  kubectl --context addons-937866 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-937866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-937866 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.936064432s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 ip
2024/08/13 23:50:40 [DEBUG] GET http://192.168.39.8:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.68s)
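
The registry check above boils down to launching a one-shot busybox pod that runs wget --spider against the in-cluster service name registry.kube-system.svc.cluster.local. Outside the harness the same probe is a single kubectl invocation; the Go wrapper below is only an illustration around the exact command taken from the log.

	// registry_probe.go: verify the in-cluster registry answers HTTP, the same way
	// the test does: a throwaway busybox pod running "wget --spider".
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "addons-937866", "run", "--rm",
			"registry-test", "--restart=Never", "--image=gcr.io/k8s-minikube/busybox",
			"-it", "--", "sh", "-c",
			"wget --spider -S http://registry.kube-system.svc.cluster.local")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			log.Fatalf("registry not reachable from inside the cluster: %v", err)
		}
	}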

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hsjpm" [4bc4989d-dafb-4d80-9eb0-069fa8e4f527] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014618242s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-937866
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-937866: (5.754952169s)
--- PASS: TestAddons/parallel/InspektorGadget (10.77s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.358146ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-p2hvc" [66ce562c-db93-4b51-b8be-ce14bacba0f8] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003700433s
addons_test.go:475: (dbg) Run:  kubectl --context addons-937866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-937866 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.578442709s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.14s)

                                                
                                    
x
+
TestAddons/parallel/CSI (82.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.379437ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-937866 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc -o jsonpath={.status.phase} -n default (this poll was repeated 46 times while waiting on the "hpvc" claim)
addons_test.go:580: (dbg) Run:  kubectl --context addons-937866 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1a7bd2cf-0703-464c-a9dd-d84beb798428] Pending
helpers_test.go:344: "task-pv-pod" [1a7bd2cf-0703-464c-a9dd-d84beb798428] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1a7bd2cf-0703-464c-a9dd-d84beb798428] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004804299s
addons_test.go:590: (dbg) Run:  kubectl --context addons-937866 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-937866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-937866 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-937866 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-937866 delete pod task-pv-pod: (1.098551455s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-937866 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-937866 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-937866 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7f35ee67-b326-4fbb-9c7b-87e1db4470fb] Pending
helpers_test.go:344: "task-pv-pod-restore" [7f35ee67-b326-4fbb-9c7b-87e1db4470fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7f35ee67-b326-4fbb-9c7b-87e1db4470fb] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004891566s
addons_test.go:632: (dbg) Run:  kubectl --context addons-937866 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-937866 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-937866 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.695688095s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (82.35s)
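
Most of the time in this test is spent in the wait on the hpvc claim, which is a plain polling loop over kubectl get pvc ... -o jsonpath={.status.phase}. The sketch below reproduces that loop; the jsonpath query and names come from the log, while the target phase ("Bound"), the poll interval, and the helper's shape are assumptions for illustration.

	// pvc_wait.go: poll a PersistentVolumeClaim's phase the way the log above does,
	// by repeatedly reading .status.phase via kubectl and a jsonpath expression.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls until the claim reports wantPhase or the timeout expires.
	func waitForPVCPhase(ctx, ns, name, wantPhase string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && strings.TrimSpace(string(out)) == wantPhase {
				return nil
			}
			time.Sleep(2 * time.Second) // interval is an assumption, not taken from the log
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, wantPhase, timeout)
	}

	func main() {
		// "Bound" is an illustrative target phase; the harness's own condition may differ.
		if err := waitForPVCPhase("addons-937866", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("claim ready")
	}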

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-937866 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-bkxbp" [7bbb22ab-c4e2-4b47-bc40-aaa1da195884] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-bkxbp" [7bbb22ab-c4e2-4b47-bc40-aaa1da195884] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-bkxbp" [7bbb22ab-c4e2-4b47-bc40-aaa1da195884] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003679279s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 addons disable headlamp --alsologtostderr -v=1: (5.715374101s)
--- PASS: TestAddons/parallel/Headlamp (18.71s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-zbplq" [70a1d079-b225-441a-960a-61684fa9f04a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004534028s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-937866
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-937866 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-937866 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4a87d31a-6948-4857-86c4-977745f416f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4a87d31a-6948-4857-86c4-977745f416f5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4a87d31a-6948-4857-86c4-977745f416f5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00387647s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-937866 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 ssh "cat /opt/local-path-provisioner/pvc-a7fb6e01-e9d6-4ee0-9569-672424823465_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-937866 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-937866 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.08s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mg5kj" [decbf56f-a46d-4b32-a963-1abb25adfab9] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004245575s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-937866
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-f2w89" [9beac97f-c375-4a37-b079-170a0c18719e] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004672551s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-937866 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-937866 addons disable yakd --alsologtostderr -v=1: (5.821140639s)
--- PASS: TestAddons/parallel/Yakd (10.83s)

                                                
                                    
x
+
TestCertOptions (70.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-314451 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-314451 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m8.672006488s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-314451 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-314451 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-314451 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-314451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-314451
--- PASS: TestCertOptions (70.08s)
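
The openssl step above exists to confirm that every value passed via --apiserver-ips and --apiserver-names ends up as a subject alternative name in the generated apiserver certificate. A sketch of that assertion, reading the certificate through minikube ssh exactly as the test does and then checking for the values from the start flags, could look like this:

	// cert_sans.go: confirm the apiserver certificate contains the SANs requested
	// via --apiserver-ips / --apiserver-names when the cluster was started.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-314451",
			"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
		if err != nil {
			log.Fatalf("reading apiserver cert: %v", err)
		}
		cert := string(out)
		// Values below are the ones passed on the start command line in the log.
		for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
			if !strings.Contains(cert, want) {
				log.Fatalf("expected SAN %q not found in apiserver certificate", want)
			}
		}
		fmt.Println("all requested SANs present")
	}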

                                                
                                    
x
+
TestCertExpiration (275.69s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-769488 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-769488 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (47.807628183s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-769488 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0814 00:57:14.186182   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-769488 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (46.897771775s)
helpers_test.go:175: Cleaning up "cert-expiration-769488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-769488
--- PASS: TestCertExpiration (275.69s)

                                                
                                    
x
+
TestForceSystemdFlag (67.19s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-288470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-288470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.020993399s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-288470 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-288470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-288470
--- PASS: TestForceSystemdFlag (67.19s)
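
--force-systemd asks minikube to configure the container runtime for the systemd cgroup manager, and the test verifies it by reading CRI-O's drop-in config. A rough way to make the same check yourself; the profile name systemd-demo is an example, and the exact key to grep for (cgroup_manager) is an assumption, not something shown in this log:

    out/minikube-linux-amd64 start -p systemd-demo --memory=2048 --force-systemd \
      --driver=kvm2 --container-runtime=crio
    # print the drop-in the test reads and look for the cgroup manager setting (key name assumed)
    out/minikube-linux-amd64 -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
    out/minikube-linux-amd64 delete -p systemd-demo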

                                                
                                    
TestForceSystemdEnv (46.13s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-900037 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-900037 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.156941672s)
helpers_test.go:175: Cleaning up "force-systemd-env-900037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-900037
--- PASS: TestForceSystemdEnv (46.13s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.87s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.87s)

                                                
                                    
TestErrorSpam/setup (38.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-100227 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-100227 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-100227 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-100227 --driver=kvm2  --container-runtime=crio: (38.636742445s)
--- PASS: TestErrorSpam/setup (38.64s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
TestErrorSpam/unpause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

                                                
                                    
TestErrorSpam/stop (4.57s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop: (1.575008564s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop: (1.667294759s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-100227 --log_dir /tmp/nospam-100227 stop: (1.327321208s)
--- PASS: TestErrorSpam/stop (4.57s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19429-9425/.minikube/files/etc/test/nested/copy/16589/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (51.3s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0814 00:00:05.518672   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.525704   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.537141   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.558599   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.600035   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.681546   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:05.843054   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:06.164798   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:06.806888   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:08.088502   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:10.649906   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:15.771665   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:00:26.013350   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-770612 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (51.303016955s)
--- PASS: TestFunctional/serial/StartWithProxy (51.30s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.19s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --alsologtostderr -v=8
E0814 00:00:46.495413   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-770612 --alsologtostderr -v=8: (34.19122889s)
functional_test.go:663: soft start took 34.191931736s for "functional-770612" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.19s)
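
A "soft start" is simply rerunning minikube start against a profile that is already up: the existing VM and cluster configuration are reused rather than recreated. Sketch, using a hypothetical profile fn-demo that was started earlier:

    # first start creates the VM and cluster
    out/minikube-linux-amd64 start -p fn-demo --memory=4000 --driver=kvm2 --container-runtime=crio
    # running start again on the same profile takes the soft-start path exercised above
    out/minikube-linux-amd64 start -p fn-demo --alsologtostderr -v=8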

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-770612 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:3.1: (1.126339299s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:3.3: (1.287692402s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 cache add registry.k8s.io/pause:latest: (1.242881177s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-770612 /tmp/TestFunctionalserialCacheCmdcacheadd_local3411422622/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache add minikube-local-cache-test:functional-770612
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 cache add minikube-local-cache-test:functional-770612: (1.777018358s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache delete minikube-local-cache-test:functional-770612
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-770612
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)
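
The add_local test builds a throwaway image with docker and pushes it into the profile's image cache so the node can use it without a registry. The same flow with hypothetical names (image tag my-local-image:demo, build context ./app, profile fn-demo):

    docker build -t my-local-image:demo ./app
    # copy the freshly built image into minikube's cache
    out/minikube-linux-amd64 -p fn-demo cache add my-local-image:demo
    # remove it from the cache (and from the host) when done
    out/minikube-linux-amd64 -p fn-demo cache delete my-local-image:demo
    docker rmi my-local-image:demo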

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.498875ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 cache reload: (1.018160801s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
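
The reload test removes a cached image from inside the node, confirms crictl can no longer find it, and then has minikube push the cache back in. The same sequence, reusing the pause image from the log and a hypothetical profile fn-demo:

    # delete the image inside the node
    out/minikube-linux-amd64 -p fn-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    # now fails with 'no such image'
    out/minikube-linux-amd64 -p fn-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # re-push every cached image into the node, after which the inspect succeeds again
    out/minikube-linux-amd64 -p fn-demo cache reload
    out/minikube-linux-amd64 -p fn-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest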

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 kubectl -- --context functional-770612 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-770612 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0814 00:01:27.457930   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-770612 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.524177496s)
functional_test.go:761: restart took 41.524306573s for "functional-770612" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.52s)
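
--extra-config forwards per-component flags through kubeadm on (re)start; here it swaps the apiserver's admission plugins. The equivalent one-liner against a hypothetical running profile fn-demo:

    out/minikube-linux-amd64 start -p fn-demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all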

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-770612 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 logs: (1.333402194s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 logs --file /tmp/TestFunctionalserialLogsFileCmd2983453662/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 logs --file /tmp/TestFunctionalserialLogsFileCmd2983453662/001/logs.txt: (1.371295937s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-770612 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-770612
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-770612: exit status 115 (270.958353ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.191:32106 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-770612 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)
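
The invalid-service test applies a Service that no running pod backs and expects minikube service to fail with SVC_UNREACHABLE instead of printing a usable URL. Sketch, assuming a manifest invalid-svc.yaml shaped like the testdata one (the manifest itself is not reproduced in this log) and a hypothetical profile fn-demo:

    kubectl --context fn-demo apply -f invalid-svc.yaml
    # exits non-zero (status 115 above) because the service has no running endpoints
    out/minikube-linux-amd64 service invalid-svc -p fn-demo
    kubectl --context fn-demo delete -f invalid-svc.yaml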

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 config get cpus: exit status 14 (43.800521ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 config get cpus: exit status 14 (42.745669ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
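
minikube config stores per-profile settings; reading a key that was never set, or has just been unset, exits with status 14 as seen above. The same round trip, with fn-demo as a placeholder profile:

    # reading an unset key fails (exit status 14)
    out/minikube-linux-amd64 -p fn-demo config get cpus || echo "cpus not set (exit $?)"
    out/minikube-linux-amd64 -p fn-demo config set cpus 2
    out/minikube-linux-amd64 -p fn-demo config get cpus      # -> 2
    out/minikube-linux-amd64 -p fn-demo config unset cpus
    out/minikube-linux-amd64 -p fn-demo config get cpus || echo "cpus not set again (exit $?)"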

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-770612 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-770612 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26178: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.27s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-770612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (123.634476ms)

                                                
                                                
-- stdout --
	* [functional-770612] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:02:41.912405   26025 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:02:41.912654   26025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:02:41.912662   26025 out.go:304] Setting ErrFile to fd 2...
	I0814 00:02:41.912666   26025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:02:41.912819   26025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:02:41.913331   26025 out.go:298] Setting JSON to false
	I0814 00:02:41.914286   26025 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2708,"bootTime":1723591054,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:02:41.914340   26025 start.go:139] virtualization: kvm guest
	I0814 00:02:41.916344   26025 out.go:177] * [functional-770612] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:02:41.917720   26025 notify.go:220] Checking for updates...
	I0814 00:02:41.917735   26025 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:02:41.918984   26025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:02:41.920177   26025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:02:41.921321   26025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:02:41.922550   26025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:02:41.923806   26025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:02:41.925361   26025 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:02:41.925764   26025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:02:41.925836   26025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:02:41.941497   26025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37035
	I0814 00:02:41.941824   26025 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:02:41.942325   26025 main.go:141] libmachine: Using API Version  1
	I0814 00:02:41.942348   26025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:02:41.942717   26025 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:02:41.942918   26025 main.go:141] libmachine: (functional-770612) Calling .DriverName
	I0814 00:02:41.943195   26025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:02:41.943658   26025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:02:41.943724   26025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:02:41.958435   26025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0814 00:02:41.958816   26025 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:02:41.959211   26025 main.go:141] libmachine: Using API Version  1
	I0814 00:02:41.959228   26025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:02:41.959604   26025 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:02:41.959775   26025 main.go:141] libmachine: (functional-770612) Calling .DriverName
	I0814 00:02:41.991480   26025 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 00:02:41.992608   26025 start.go:297] selected driver: kvm2
	I0814 00:02:41.992620   26025 start.go:901] validating driver "kvm2" against &{Name:functional-770612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-770612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:02:41.992759   26025 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:02:41.994578   26025 out.go:177] 
	W0814 00:02:41.995768   26025 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 00:02:41.996799   26025 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
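
--dry-run validates the requested settings against the existing profile without touching the VM: an impossible memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while a sane dry run exits cleanly. Sketch against a hypothetical existing profile fn-demo:

    # fails validation: 250MB is below the usable minimum reported above
    out/minikube-linux-amd64 start -p fn-demo --dry-run --memory 250MB --alsologtostderr \
      --driver=kvm2 --container-runtime=crio
    # the same dry run without the bad memory request validates cleanly
    out/minikube-linux-amd64 start -p fn-demo --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio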

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-770612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-770612 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (124.626497ms)

                                                
                                                
-- stdout --
	* [functional-770612] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:02:35.995223   25572 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:02:35.995339   25572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:02:35.995350   25572 out.go:304] Setting ErrFile to fd 2...
	I0814 00:02:35.995356   25572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:02:35.995656   25572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:02:35.996188   25572 out.go:298] Setting JSON to false
	I0814 00:02:35.997070   25572 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2702,"bootTime":1723591054,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:02:35.997129   25572 start.go:139] virtualization: kvm guest
	I0814 00:02:35.999277   25572 out.go:177] * [functional-770612] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0814 00:02:36.000551   25572 notify.go:220] Checking for updates...
	I0814 00:02:36.000586   25572 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:02:36.001993   25572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:02:36.003223   25572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:02:36.004358   25572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:02:36.005734   25572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:02:36.006920   25572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:02:36.008633   25572 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:02:36.009213   25572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:02:36.009267   25572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:02:36.023603   25572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I0814 00:02:36.024011   25572 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:02:36.024514   25572 main.go:141] libmachine: Using API Version  1
	I0814 00:02:36.024531   25572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:02:36.024899   25572 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:02:36.025108   25572 main.go:141] libmachine: (functional-770612) Calling .DriverName
	I0814 00:02:36.025342   25572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:02:36.025688   25572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:02:36.025731   25572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:02:36.039915   25572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39899
	I0814 00:02:36.040370   25572 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:02:36.040856   25572 main.go:141] libmachine: Using API Version  1
	I0814 00:02:36.040874   25572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:02:36.041154   25572 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:02:36.041322   25572 main.go:141] libmachine: (functional-770612) Calling .DriverName
	I0814 00:02:36.072443   25572 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0814 00:02:36.073613   25572 start.go:297] selected driver: kvm2
	I0814 00:02:36.073630   25572 start.go:901] validating driver "kvm2" against &{Name:functional-770612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-770612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 00:02:36.073753   25572 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:02:36.075770   25572 out.go:177] 
	W0814 00:02:36.076881   25572 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 00:02:36.078033   25572 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)
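
minikube status accepts a Go-template format string and JSON output; the fields used above are Host, Kubelet, APIServer and Kubeconfig. A quick sketch (profile name fn-demo is a placeholder):

    out/minikube-linux-amd64 -p fn-demo status
    out/minikube-linux-amd64 -p fn-demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p fn-demo status -o json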

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (51.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-770612 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-770612 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-pvs77" [3755e259-9358-4e32-b588-f70398255a8e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-pvs77" [3755e259-9358-4e32-b588-f70398255a8e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 51.003821738s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.191:32645
functional_test.go:1675: http://192.168.39.191:32645: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-pvs77

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.191:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.191:32645
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (51.44s)
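
The connectivity test publishes an echoserver Deployment behind a NodePort Service, asks minikube for the URL, and curls it. The same flow by hand, using the deployment name and image from the log and a hypothetical profile fn-demo:

    kubectl --context fn-demo create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context fn-demo expose deployment hello-node-connect --type=NodePort --port=8080
    # prints something like http://<node-ip>:<nodeport> once the pod is Running
    URL=$(out/minikube-linux-amd64 -p fn-demo service hello-node-connect --url)
    curl -s "$URL"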

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f3274ad6-6a03-4c76-857e-b3744f69110f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003851518s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-770612 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-770612 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-770612 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-770612 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-770612 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a261ca65-2fd7-4b70-b5cc-5b66a89d2708] Pending
helpers_test.go:344: "sp-pod" [a261ca65-2fd7-4b70-b5cc-5b66a89d2708] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a261ca65-2fd7-4b70-b5cc-5b66a89d2708] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.005107234s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-770612 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-770612 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-770612 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e032ab21-95f9-45d9-ab3e-ed28d3f1c0a5] Pending
helpers_test.go:344: "sp-pod" [e032ab21-95f9-45d9-ab3e-ed28d3f1c0a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e032ab21-95f9-45d9-ab3e-ed28d3f1c0a5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.009831133s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-770612 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.14s)
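
The PVC test demonstrates that data written to a bound claim survives pod deletion: write a file from one pod, delete the pod, recreate it, and read the file back. Sketch, assuming pvc.yaml and pod.yaml equivalent to the testdata manifests (not shown in this log), with the pod named sp-pod and the claim mounted at /tmp/mount as above:

    kubectl --context fn-demo apply -f pvc.yaml
    kubectl --context fn-demo apply -f pod.yaml
    kubectl --context fn-demo exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the claim, and the file on it, should still be there
    kubectl --context fn-demo delete -f pod.yaml
    kubectl --context fn-demo apply -f pod.yaml
    kubectl --context fn-demo exec sp-pod -- ls /tmp/mount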

                                                
                                    
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh -n functional-770612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cp functional-770612:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4106661918/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh -n functional-770612 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh -n functional-770612 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)
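
minikube cp copies files into and out of the node, and the test reads them back over ssh. Sketch with example paths and a hypothetical profile fn-demo:

    # host -> node
    out/minikube-linux-amd64 -p fn-demo cp ./cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p fn-demo ssh "sudo cat /home/docker/cp-test.txt"
    # node -> host (the node name matches the profile name on a single-node cluster)
    out/minikube-linux-amd64 -p fn-demo cp fn-demo:/home/docker/cp-test.txt ./cp-test-copy.txt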

                                                
                                    
TestFunctional/parallel/MySQL (21.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-770612 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7s48z" [13b5cffb-cef7-46e6-a058-892b7cb82e65] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7s48z" [13b5cffb-cef7-46e6-a058-892b7cb82e65] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003681887s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-770612 exec mysql-6cdb49bbb-7s48z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-770612 exec mysql-6cdb49bbb-7s48z -- mysql -ppassword -e "show databases;": exit status 1 (287.147408ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-770612 exec mysql-6cdb49bbb-7s48z -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.96s)
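The first exec above fails with ERROR 2002 because the pod reports Running before mysqld has finished creating its socket; the test simply retries and the second attempt succeeds. A minimal sketch of the same wait-then-retry check, assuming the deployment from testdata/mysql.yaml is already applied (the retry loop is illustrative, not part of the test):

  kubectl --context functional-770612 wait --for=condition=ready pod -l app=mysql --timeout=10m
  POD=$(kubectl --context functional-770612 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
  # the socket can lag behind pod readiness, so retry the query a few times
  for i in 1 2 3 4 5; do
    kubectl --context functional-770612 exec "$POD" -- mysql -ppassword -e "show databases;" && break
    sleep 5
  done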

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16589/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/test/nested/copy/16589/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
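The path checked above comes from minikube's file sync mechanism: anything placed under $MINIKUBE_HOME/files/ on the host is copied into the guest at the same relative path when the node is started. A minimal sketch, assuming the default ~/.minikube home and an illustrative /etc/demo target path:

  mkdir -p ~/.minikube/files/etc/demo
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/demo/hosts
  # the file is pushed on (re)start, then readable inside the guest
  out/minikube-linux-amd64 -p functional-770612 start
  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/demo/hosts"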

TestFunctional/parallel/CertSync (1.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16589.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/ssl/certs/16589.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16589.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /usr/share/ca-certificates/16589.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/165892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/ssl/certs/165892.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/165892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /usr/share/ca-certificates/165892.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
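The hash-named entries (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links for the two synced certificates, so the test checks both the literal file and the name a TLS stack would look up. A rough way to confirm the mapping by hand, assuming openssl is available inside the guest:

  # should print 51391683, the hash used for /etc/ssl/certs/51391683.0 above
  out/minikube-linux-amd64 -p functional-770612 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/16589.pem"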

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-770612 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "sudo systemctl is-active docker": exit status 1 (218.476324ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "sudo systemctl is-active containerd": exit status 1 (202.598147ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
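The status-3 exits above are what systemctl is-active returns for an inactive unit, so the non-zero exits are the expected outcome on a crio cluster: docker and containerd must both report "inactive". A quick manual equivalent across all three runtimes:

  for unit in crio docker containerd; do
    out/minikube-linux-amd64 -p functional-770612 ssh "sudo systemctl is-active $unit" || true
  done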

TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-770612 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-770612
localhost/kicbase/echo-server:functional-770612
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-770612 image ls --format short --alsologtostderr:
I0814 00:02:48.216401   26723 out.go:291] Setting OutFile to fd 1 ...
I0814 00:02:48.216855   26723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:48.216869   26723 out.go:304] Setting ErrFile to fd 2...
I0814 00:02:48.216875   26723 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:48.217079   26723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
I0814 00:02:48.217634   26723 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:48.217725   26723 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:48.218109   26723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:48.218154   26723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:48.232900   26723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
I0814 00:02:48.233341   26723 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:48.233962   26723 main.go:141] libmachine: Using API Version  1
I0814 00:02:48.233991   26723 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:48.234348   26723 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:48.234511   26723 main.go:141] libmachine: (functional-770612) Calling .GetState
I0814 00:02:48.236344   26723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:48.236387   26723 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:48.251107   26723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
I0814 00:02:48.251489   26723 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:48.251936   26723 main.go:141] libmachine: Using API Version  1
I0814 00:02:48.251960   26723 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:48.252255   26723 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:48.252437   26723 main.go:141] libmachine: (functional-770612) Calling .DriverName
I0814 00:02:48.252604   26723 ssh_runner.go:195] Run: systemctl --version
I0814 00:02:48.252625   26723 main.go:141] libmachine: (functional-770612) Calling .GetSSHHostname
I0814 00:02:48.255988   26723 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:48.256431   26723 main.go:141] libmachine: (functional-770612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:9d:7e", ip: ""} in network mk-functional-770612: {Iface:virbr1 ExpiryTime:2024-08-14 01:00:05 +0000 UTC Type:0 Mac:52:54:00:72:9d:7e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-770612 Clientid:01:52:54:00:72:9d:7e}
I0814 00:02:48.256460   26723 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined IP address 192.168.39.191 and MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:48.256578   26723 main.go:141] libmachine: (functional-770612) Calling .GetSSHPort
I0814 00:02:48.256772   26723 main.go:141] libmachine: (functional-770612) Calling .GetSSHKeyPath
I0814 00:02:48.256913   26723 main.go:141] libmachine: (functional-770612) Calling .GetSSHUsername
I0814 00:02:48.257058   26723 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/functional-770612/id_rsa Username:docker}
I0814 00:02:48.363634   26723 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 00:02:48.430526   26723 main.go:141] libmachine: Making call to close driver server
I0814 00:02:48.430542   26723 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:48.430807   26723 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:48.430828   26723 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:48.430837   26723 main.go:141] libmachine: Making call to close driver server
I0814 00:02:48.430838   26723 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:48.430847   26723 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:48.431057   26723 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:48.431075   26723 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:48.431078   26723 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-770612 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-770612  | 49eb5193bf102 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| localhost/kicbase/echo-server           | functional-770612  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-770612 image ls --format table --alsologtostderr:
I0814 00:02:52.652990   26863 out.go:291] Setting OutFile to fd 1 ...
I0814 00:02:52.653094   26863 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:52.653103   26863 out.go:304] Setting ErrFile to fd 2...
I0814 00:02:52.653107   26863 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:52.653272   26863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
I0814 00:02:52.653794   26863 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:52.653894   26863 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:52.654267   26863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:52.654311   26863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:52.669010   26863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
I0814 00:02:52.669426   26863 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:52.671259   26863 main.go:141] libmachine: Using API Version  1
I0814 00:02:52.671308   26863 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:52.671800   26863 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:52.672020   26863 main.go:141] libmachine: (functional-770612) Calling .GetState
I0814 00:02:52.674096   26863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:52.674131   26863 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:52.688786   26863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
I0814 00:02:52.689135   26863 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:52.689600   26863 main.go:141] libmachine: Using API Version  1
I0814 00:02:52.689624   26863 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:52.689916   26863 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:52.690129   26863 main.go:141] libmachine: (functional-770612) Calling .DriverName
I0814 00:02:52.690325   26863 ssh_runner.go:195] Run: systemctl --version
I0814 00:02:52.690353   26863 main.go:141] libmachine: (functional-770612) Calling .GetSSHHostname
I0814 00:02:52.693143   26863 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:52.693541   26863 main.go:141] libmachine: (functional-770612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:9d:7e", ip: ""} in network mk-functional-770612: {Iface:virbr1 ExpiryTime:2024-08-14 01:00:05 +0000 UTC Type:0 Mac:52:54:00:72:9d:7e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-770612 Clientid:01:52:54:00:72:9d:7e}
I0814 00:02:52.693567   26863 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined IP address 192.168.39.191 and MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:52.693706   26863 main.go:141] libmachine: (functional-770612) Calling .GetSSHPort
I0814 00:02:52.693849   26863 main.go:141] libmachine: (functional-770612) Calling .GetSSHKeyPath
I0814 00:02:52.693979   26863 main.go:141] libmachine: (functional-770612) Calling .GetSSHUsername
I0814 00:02:52.694101   26863 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/functional-770612/id_rsa Username:docker}
I0814 00:02:52.788236   26863 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 00:02:52.833903   26863 main.go:141] libmachine: Making call to close driver server
I0814 00:02:52.833923   26863 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:52.834204   26863 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:52.834222   26863 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:52.834234   26863 main.go:141] libmachine: Making call to close driver server
I0814 00:02:52.834242   26863 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:52.835560   26863 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:52.835577   26863 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:52.835659   26863 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-770612 image ls --format json --alsologtostderr:
[{"id":"49eb5193bf1021f0eb65a9d6c8334a9e9f5dad1996825e15a3e788caa4895595","repoDigests":["localhost/minikube-local-cache-test@sha256:00502ea04a4240cfd130ca2fdd3a21652b6731d0612daed17307ac8261e2994c"],"repoTags":["localhost/minikube-local-cache-test:functional-770612"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d
647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","
gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-770612"],"size":"4943877"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","regis
try.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256
:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"
],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af9
88cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-770612 image ls --format json --alsologtostderr:
I0814 00:02:52.433920   26840 out.go:291] Setting OutFile to fd 1 ...
I0814 00:02:52.434030   26840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:52.434059   26840 out.go:304] Setting ErrFile to fd 2...
I0814 00:02:52.434068   26840 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:52.434245   26840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
I0814 00:02:52.434800   26840 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:52.434915   26840 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:52.435283   26840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:52.435336   26840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:52.450621   26840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
I0814 00:02:52.451063   26840 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:52.451581   26840 main.go:141] libmachine: Using API Version  1
I0814 00:02:52.451602   26840 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:52.451923   26840 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:52.452104   26840 main.go:141] libmachine: (functional-770612) Calling .GetState
I0814 00:02:52.453999   26840 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:52.454077   26840 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:52.469763   26840 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
I0814 00:02:52.470174   26840 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:52.470672   26840 main.go:141] libmachine: Using API Version  1
I0814 00:02:52.470701   26840 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:52.471020   26840 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:52.471215   26840 main.go:141] libmachine: (functional-770612) Calling .DriverName
I0814 00:02:52.471432   26840 ssh_runner.go:195] Run: systemctl --version
I0814 00:02:52.471454   26840 main.go:141] libmachine: (functional-770612) Calling .GetSSHHostname
I0814 00:02:52.474511   26840 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:52.474933   26840 main.go:141] libmachine: (functional-770612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:9d:7e", ip: ""} in network mk-functional-770612: {Iface:virbr1 ExpiryTime:2024-08-14 01:00:05 +0000 UTC Type:0 Mac:52:54:00:72:9d:7e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-770612 Clientid:01:52:54:00:72:9d:7e}
I0814 00:02:52.474963   26840 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined IP address 192.168.39.191 and MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:52.475121   26840 main.go:141] libmachine: (functional-770612) Calling .GetSSHPort
I0814 00:02:52.475302   26840 main.go:141] libmachine: (functional-770612) Calling .GetSSHKeyPath
I0814 00:02:52.475495   26840 main.go:141] libmachine: (functional-770612) Calling .GetSSHUsername
I0814 00:02:52.475630   26840 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/functional-770612/id_rsa Username:docker}
I0814 00:02:52.559453   26840 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 00:02:52.601421   26840 main.go:141] libmachine: Making call to close driver server
I0814 00:02:52.601442   26840 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:52.601692   26840 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:52.601708   26840 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:52.601721   26840 main.go:141] libmachine: Making call to close driver server
I0814 00:02:52.601728   26840 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:52.601727   26840 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:52.601954   26840 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:52.601989   26840 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:52.602002   26840 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
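Of the four list formats exercised here, JSON is the one that lends itself to post-processing. A small sketch pulling just the tagged image names out of the output above (jq on the host is an assumption, not something the test uses):

  out/minikube-linux-amd64 -p functional-770612 image ls --format json \
    | jq -r '.[].repoTags[]'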

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-770612 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 49eb5193bf1021f0eb65a9d6c8334a9e9f5dad1996825e15a3e788caa4895595
repoDigests:
- localhost/minikube-local-cache-test@sha256:00502ea04a4240cfd130ca2fdd3a21652b6731d0612daed17307ac8261e2994c
repoTags:
- localhost/minikube-local-cache-test:functional-770612
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-770612
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-770612 image ls --format yaml --alsologtostderr:
I0814 00:02:48.478398   26747 out.go:291] Setting OutFile to fd 1 ...
I0814 00:02:48.478512   26747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:48.478523   26747 out.go:304] Setting ErrFile to fd 2...
I0814 00:02:48.478529   26747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:48.478724   26747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
I0814 00:02:48.479264   26747 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:48.479385   26747 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:48.479789   26747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:48.479854   26747 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:48.498953   26747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
I0814 00:02:48.499430   26747 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:48.500034   26747 main.go:141] libmachine: Using API Version  1
I0814 00:02:48.500049   26747 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:48.500403   26747 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:48.500603   26747 main.go:141] libmachine: (functional-770612) Calling .GetState
I0814 00:02:48.502462   26747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:48.502500   26747 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:48.519638   26747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
I0814 00:02:48.520200   26747 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:48.520807   26747 main.go:141] libmachine: Using API Version  1
I0814 00:02:48.520827   26747 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:48.521185   26747 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:48.521373   26747 main.go:141] libmachine: (functional-770612) Calling .DriverName
I0814 00:02:48.521590   26747 ssh_runner.go:195] Run: systemctl --version
I0814 00:02:48.521633   26747 main.go:141] libmachine: (functional-770612) Calling .GetSSHHostname
I0814 00:02:48.524305   26747 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:48.524723   26747 main.go:141] libmachine: (functional-770612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:9d:7e", ip: ""} in network mk-functional-770612: {Iface:virbr1 ExpiryTime:2024-08-14 01:00:05 +0000 UTC Type:0 Mac:52:54:00:72:9d:7e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-770612 Clientid:01:52:54:00:72:9d:7e}
I0814 00:02:48.524758   26747 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined IP address 192.168.39.191 and MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:48.524886   26747 main.go:141] libmachine: (functional-770612) Calling .GetSSHPort
I0814 00:02:48.525056   26747 main.go:141] libmachine: (functional-770612) Calling .GetSSHKeyPath
I0814 00:02:48.525206   26747 main.go:141] libmachine: (functional-770612) Calling .GetSSHUsername
I0814 00:02:48.525364   26747 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/functional-770612/id_rsa Username:docker}
I0814 00:02:48.626847   26747 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 00:02:48.679668   26747 main.go:141] libmachine: Making call to close driver server
I0814 00:02:48.679696   26747 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:48.679954   26747 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:48.680029   26747 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:48.680047   26747 main.go:141] libmachine: Making call to close driver server
I0814 00:02:48.680057   26747 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:48.679989   26747 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:48.680296   26747 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:48.680305   26747 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:48.680319   26747 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh pgrep buildkitd: exit status 1 (287.536132ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image build -t localhost/my-image:functional-770612 testdata/build --alsologtostderr
E0814 00:02:49.380175   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
2024/08/14 00:02:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 image build -t localhost/my-image:functional-770612 testdata/build --alsologtostderr: (4.885877214s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-770612 image build -t localhost/my-image:functional-770612 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3317c08f2fb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-770612
--> 990f186ecec
Successfully tagged localhost/my-image:functional-770612
990f186ecec7d84c2751d4b30c4f4c68ea9dfbab0dce4178c2b015da0590f038
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-770612 image build -t localhost/my-image:functional-770612 testdata/build --alsologtostderr:
I0814 00:02:49.029905   26800 out.go:291] Setting OutFile to fd 1 ...
I0814 00:02:49.030230   26800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:49.030259   26800 out.go:304] Setting ErrFile to fd 2...
I0814 00:02:49.030271   26800 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 00:02:49.030459   26800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
I0814 00:02:49.031046   26800 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:49.031594   26800 config.go:182] Loaded profile config "functional-770612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 00:02:49.031998   26800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:49.032082   26800 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:49.049809   26800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
I0814 00:02:49.050355   26800 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:49.050979   26800 main.go:141] libmachine: Using API Version  1
I0814 00:02:49.050998   26800 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:49.051335   26800 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:49.051520   26800 main.go:141] libmachine: (functional-770612) Calling .GetState
I0814 00:02:49.053196   26800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 00:02:49.053238   26800 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 00:02:49.068205   26800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
I0814 00:02:49.068664   26800 main.go:141] libmachine: () Calling .GetVersion
I0814 00:02:49.069138   26800 main.go:141] libmachine: Using API Version  1
I0814 00:02:49.069162   26800 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 00:02:49.069583   26800 main.go:141] libmachine: () Calling .GetMachineName
I0814 00:02:49.069747   26800 main.go:141] libmachine: (functional-770612) Calling .DriverName
I0814 00:02:49.069916   26800 ssh_runner.go:195] Run: systemctl --version
I0814 00:02:49.069940   26800 main.go:141] libmachine: (functional-770612) Calling .GetSSHHostname
I0814 00:02:49.073068   26800 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:49.073458   26800 main.go:141] libmachine: (functional-770612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:9d:7e", ip: ""} in network mk-functional-770612: {Iface:virbr1 ExpiryTime:2024-08-14 01:00:05 +0000 UTC Type:0 Mac:52:54:00:72:9d:7e Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-770612 Clientid:01:52:54:00:72:9d:7e}
I0814 00:02:49.073481   26800 main.go:141] libmachine: (functional-770612) DBG | domain functional-770612 has defined IP address 192.168.39.191 and MAC address 52:54:00:72:9d:7e in network mk-functional-770612
I0814 00:02:49.073647   26800 main.go:141] libmachine: (functional-770612) Calling .GetSSHPort
I0814 00:02:49.073781   26800 main.go:141] libmachine: (functional-770612) Calling .GetSSHKeyPath
I0814 00:02:49.073920   26800 main.go:141] libmachine: (functional-770612) Calling .GetSSHUsername
I0814 00:02:49.074116   26800 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/functional-770612/id_rsa Username:docker}
I0814 00:02:49.181634   26800 build_images.go:161] Building image from path: /tmp/build.1614078508.tar
I0814 00:02:49.181696   26800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0814 00:02:49.197908   26800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1614078508.tar
I0814 00:02:49.211390   26800 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1614078508.tar: stat -c "%s %y" /var/lib/minikube/build/build.1614078508.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1614078508.tar': No such file or directory
I0814 00:02:49.211429   26800 ssh_runner.go:362] scp /tmp/build.1614078508.tar --> /var/lib/minikube/build/build.1614078508.tar (3072 bytes)
I0814 00:02:49.265192   26800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1614078508
I0814 00:02:49.296434   26800 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1614078508 -xf /var/lib/minikube/build/build.1614078508.tar
I0814 00:02:49.313713   26800 crio.go:315] Building image: /var/lib/minikube/build/build.1614078508
I0814 00:02:49.313836   26800 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-770612 /var/lib/minikube/build/build.1614078508 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0814 00:02:53.808472   26800 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-770612 /var/lib/minikube/build/build.1614078508 --cgroup-manager=cgroupfs: (4.494605192s)
I0814 00:02:53.808542   26800 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1614078508
I0814 00:02:53.840517   26800 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1614078508.tar
I0814 00:02:53.853580   26800 build_images.go:217] Built localhost/my-image:functional-770612 from /tmp/build.1614078508.tar
I0814 00:02:53.853613   26800 build_images.go:133] succeeded building to: functional-770612
I0814 00:02:53.853626   26800 build_images.go:134] failed building to: 
I0814 00:02:53.853652   26800 main.go:141] libmachine: Making call to close driver server
I0814 00:02:53.853669   26800 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:53.853933   26800 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:53.853947   26800 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:53.853954   26800 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 00:02:53.853965   26800 main.go:141] libmachine: Making call to close driver server
I0814 00:02:53.853975   26800 main.go:141] libmachine: (functional-770612) Calling .Close
I0814 00:02:53.854216   26800 main.go:141] libmachine: (functional-770612) DBG | Closing plugin on server side
I0814 00:02:53.854228   26800 main.go:141] libmachine: Successfully made call to close driver server
I0814 00:02:53.854253   26800 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.41s)
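For reference, the in-guest build flow recorded above can be replayed by hand. This is a minimal sketch assuming the build-context tarball has already been copied to /var/lib/minikube/build (the scp step in the log); the temporary directory name and image tag are specific to this run:

    # inside the VM (out/minikube-linux-amd64 -p functional-770612 ssh), per the log above
    sudo mkdir -p /var/lib/minikube/build/build.1614078508
    sudo tar -C /var/lib/minikube/build/build.1614078508 -xf /var/lib/minikube/build/build.1614078508.tar
    sudo podman build -t localhost/my-image:functional-770612 \
        /var/lib/minikube/build/build.1614078508 --cgroup-manager=cgroupfs
    # cleanup, as the runner does after the build succeeds
    sudo rm -rf /var/lib/minikube/build/build.1614078508
    sudo rm -f /var/lib/minikube/build/build.1614078508.tar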

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.765339313s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-770612
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image load --daemon kicbase/echo-server:functional-770612 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 image load --daemon kicbase/echo-server:functional-770612 --alsologtostderr: (1.253827166s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "228.619989ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.002577ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "240.495365ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.235352ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image load --daemon kicbase/echo-server:functional-770612 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-770612
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image load --daemon kicbase/echo-server:functional-770612 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 image load --daemon kicbase/echo-server:functional-770612 --alsologtostderr: (7.443475121s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image save kicbase/echo-server:functional-770612 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image rm kicbase/echo-server:functional-770612 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-770612 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.345437357s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-770612
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 image save --daemon kicbase/echo-server:functional-770612 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-770612
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)
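Taken together, the four image tests above exercise a save/remove/reload round trip. A condensed sketch of that sequence, using the profile name and tarball path from this run:

    out/minikube-linux-amd64 -p functional-770612 image save kicbase/echo-server:functional-770612 \
        /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-770612 image rm kicbase/echo-server:functional-770612 --alsologtostderr
    out/minikube-linux-amd64 -p functional-770612 image load \
        /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    docker rmi kicbase/echo-server:functional-770612
    out/minikube-linux-amd64 -p functional-770612 image save --daemon kicbase/echo-server:functional-770612 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-770612
    out/minikube-linux-amd64 -p functional-770612 image ls   # verify after each step, as the tests do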

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-770612 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-770612 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-z9wj8" [d2f01ef4-cf9d-484b-a545-b8deb1cb32e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-z9wj8" [d2f01ef4-cf9d-484b-a545-b8deb1cb32e8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003014922s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdany-port1485225286/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723593756082539378" to /tmp/TestFunctionalparallelMountCmdany-port1485225286/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723593756082539378" to /tmp/TestFunctionalparallelMountCmdany-port1485225286/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723593756082539378" to /tmp/TestFunctionalparallelMountCmdany-port1485225286/001/test-1723593756082539378
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (222.468653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 00:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 00:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 00:02 test-1723593756082539378
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh cat /mount-9p/test-1723593756082539378
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-770612 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [30f856ad-4f0b-4519-b54e-aaa1675ce9c2] Pending
helpers_test.go:344: "busybox-mount" [30f856ad-4f0b-4519-b54e-aaa1675ce9c2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [30f856ad-4f0b-4519-b54e-aaa1675ce9c2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [30f856ad-4f0b-4519-b54e-aaa1675ce9c2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.014159815s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-770612 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdany-port1485225286/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.40s)
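The any-port check above amounts to: start the 9p mount daemon in the background, poll findmnt until the mount appears (the first attempt can race the daemon, hence the retried non-zero exit), exercise the mount from a pod, then force-unmount. A sketch with this run's temporary directory:

    out/minikube-linux-amd64 mount -p functional-770612 \
        /tmp/TestFunctionalparallelMountCmdany-port1485225286/001:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-770612 ssh -- ls -la /mount-9p
    kubectl --context functional-770612 replace --force -f testdata/busybox-mount-test.yaml
    out/minikube-linux-amd64 -p functional-770612 ssh stat /mount-9p/created-by-pod
    out/minikube-linux-amd64 -p functional-770612 ssh "sudo umount -f /mount-9p"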

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service list -o json
functional_test.go:1494: Took "420.11443ms" to run "out/minikube-linux-amd64 -p functional-770612 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.191:30371
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.191:30371
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
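The ServiceCmd group above boils down to deploying an echoserver, exposing it as a NodePort, and resolving its URL through the service subcommands; condensed from the log (the NodePort 30371 and node IP are specific to this run):

    kubectl --context functional-770612 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-770612 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-770612 service list
    out/minikube-linux-amd64 -p functional-770612 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-770612 service hello-node --url   # -> http://192.168.39.191:30371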

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdspecific-port384635910/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (179.826556ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdspecific-port384635910/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "sudo umount -f /mount-9p": exit status 1 (187.139973ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-770612 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdspecific-port384635910/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount1: exit status 1 (235.929342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-770612 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-770612 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.15s)
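VerifyCleanup starts three mount daemons against the same host directory and then relies on a single kill for teardown; afterwards the individual daemons are already gone, as the "assuming dead" lines show. A sketch with this run's path (repeat the mount command for /mount2 and /mount3):

    out/minikube-linux-amd64 mount -p functional-770612 \
        /tmp/TestFunctionalparallelMountCmdVerifyCleanup3919354838/001:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-770612 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-770612 --kill=true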

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-770612
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-770612
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-770612
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (224.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105013 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 00:05:05.519376   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:05:33.222463   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-105013 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m43.738979389s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (224.40s)
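The HA cluster bring-up above is a single start invocation followed by a status check, as recorded in the log:

    out/minikube-linux-amd64 start -p ha-105013 --wait=true --memory=2200 --ha -v=7 \
        --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr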

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-105013 -- rollout status deployment/busybox: (4.555993146s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-b6xdd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-lq24p -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-b6xdd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-lq24p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-b6xdd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-lq24p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-b6xdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-b6xdd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-lq24p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-lq24p -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
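Each pod's host-connectivity check resolves host.minikube.internal inside the pod and pings the resulting gateway address; per the log, for one of the busybox pods (pod names are generated per run):

    out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-105013 -- exec busybox-7dff88458-5px5v -- \
        sh -c "ping -c 1 192.168.39.1"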

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-105013 -v=7 --alsologtostderr
E0814 00:07:14.185388   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.191777   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.203124   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.224512   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.265995   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.347527   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.509053   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:14.830883   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:15.472934   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:16.755014   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:19.317257   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:24.439276   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:34.680767   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:07:55.162222   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-105013 -v=7 --alsologtostderr: (55.503051768s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.31s)
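Adding the worker node is likewise a single command plus a status check:

    out/minikube-linux-amd64 node add -p ha-105013 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr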

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-105013 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp testdata/cp-test.txt ha-105013:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013:/home/docker/cp-test.txt ha-105013-m02:/home/docker/cp-test_ha-105013_ha-105013-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test_ha-105013_ha-105013-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013:/home/docker/cp-test.txt ha-105013-m03:/home/docker/cp-test_ha-105013_ha-105013-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test_ha-105013_ha-105013-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013:/home/docker/cp-test.txt ha-105013-m04:/home/docker/cp-test_ha-105013_ha-105013-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test_ha-105013_ha-105013-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp testdata/cp-test.txt ha-105013-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m02:/home/docker/cp-test.txt ha-105013:/home/docker/cp-test_ha-105013-m02_ha-105013.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test_ha-105013-m02_ha-105013.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m02:/home/docker/cp-test.txt ha-105013-m03:/home/docker/cp-test_ha-105013-m02_ha-105013-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test_ha-105013-m02_ha-105013-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m02:/home/docker/cp-test.txt ha-105013-m04:/home/docker/cp-test_ha-105013-m02_ha-105013-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test_ha-105013-m02_ha-105013-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp testdata/cp-test.txt ha-105013-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt ha-105013:/home/docker/cp-test_ha-105013-m03_ha-105013.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test_ha-105013-m03_ha-105013.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt ha-105013-m02:/home/docker/cp-test_ha-105013-m03_ha-105013-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test_ha-105013-m03_ha-105013-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m03:/home/docker/cp-test.txt ha-105013-m04:/home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test_ha-105013-m03_ha-105013-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp testdata/cp-test.txt ha-105013-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2764919469/001/cp-test_ha-105013-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt ha-105013:/home/docker/cp-test_ha-105013-m04_ha-105013.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test_ha-105013-m04_ha-105013.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt ha-105013-m02:/home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 "sudo cat /home/docker/cp-test_ha-105013-m04_ha-105013-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 cp ha-105013-m04:/home/docker/cp-test.txt ha-105013-m03:/home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m03 "sudo cat /home/docker/cp-test_ha-105013-m04_ha-105013-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.40s)
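Every CopyFile step above pairs a cp with an ssh cat verification; the pattern, shown for one host-to-node and one node-to-node copy from this run:

    out/minikube-linux-amd64 -p ha-105013 cp testdata/cp-test.txt ha-105013:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-105013 cp ha-105013:/home/docker/cp-test.txt \
        ha-105013-m02:/home/docker/cp-test_ha-105013_ha-105013-m02.txt
    out/minikube-linux-amd64 -p ha-105013 ssh -n ha-105013-m02 \
        "sudo cat /home/docker/cp-test_ha-105013_ha-105013-m02.txt"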

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (2.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-105013 node stop m02 -v=7 --alsologtostderr: (2.27968486s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr: exit status 7 (605.819842ms)

                                                
                                                
-- stdout --
	ha-105013
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105013-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105013-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105013-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:08:12.700488   31303 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:08:12.700599   31303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:08:12.700610   31303 out.go:304] Setting ErrFile to fd 2...
	I0814 00:08:12.700616   31303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:08:12.700793   31303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:08:12.700987   31303 out.go:298] Setting JSON to false
	I0814 00:08:12.701021   31303 mustload.go:65] Loading cluster: ha-105013
	I0814 00:08:12.701110   31303 notify.go:220] Checking for updates...
	I0814 00:08:12.701498   31303 config.go:182] Loaded profile config "ha-105013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:08:12.701514   31303 status.go:255] checking status of ha-105013 ...
	I0814 00:08:12.701906   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.701975   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.721942   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35965
	I0814 00:08:12.722385   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.722888   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.722906   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.723407   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.723623   31303 main.go:141] libmachine: (ha-105013) Calling .GetState
	I0814 00:08:12.725361   31303 status.go:330] ha-105013 host status = "Running" (err=<nil>)
	I0814 00:08:12.725375   31303 host.go:66] Checking if "ha-105013" exists ...
	I0814 00:08:12.725668   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.725703   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.741655   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0814 00:08:12.742083   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.742501   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.742524   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.742784   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.742952   31303 main.go:141] libmachine: (ha-105013) Calling .GetIP
	I0814 00:08:12.745408   31303 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:08:12.745832   31303 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:08:12.745870   31303 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:08:12.746007   31303 host.go:66] Checking if "ha-105013" exists ...
	I0814 00:08:12.746304   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.746340   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.761422   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0814 00:08:12.761759   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.762168   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.762193   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.762467   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.762688   31303 main.go:141] libmachine: (ha-105013) Calling .DriverName
	I0814 00:08:12.762835   31303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:08:12.762864   31303 main.go:141] libmachine: (ha-105013) Calling .GetSSHHostname
	I0814 00:08:12.765404   31303 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:08:12.765863   31303 main.go:141] libmachine: (ha-105013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:21:6c", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:03:22 +0000 UTC Type:0 Mac:52:54:00:e0:21:6c Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-105013 Clientid:01:52:54:00:e0:21:6c}
	I0814 00:08:12.765888   31303 main.go:141] libmachine: (ha-105013) DBG | domain ha-105013 has defined IP address 192.168.39.79 and MAC address 52:54:00:e0:21:6c in network mk-ha-105013
	I0814 00:08:12.766105   31303 main.go:141] libmachine: (ha-105013) Calling .GetSSHPort
	I0814 00:08:12.766278   31303 main.go:141] libmachine: (ha-105013) Calling .GetSSHKeyPath
	I0814 00:08:12.766453   31303 main.go:141] libmachine: (ha-105013) Calling .GetSSHUsername
	I0814 00:08:12.766597   31303 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013/id_rsa Username:docker}
	I0814 00:08:12.850413   31303 ssh_runner.go:195] Run: systemctl --version
	I0814 00:08:12.856374   31303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:08:12.871521   31303 kubeconfig.go:125] found "ha-105013" server: "https://192.168.39.254:8443"
	I0814 00:08:12.871555   31303 api_server.go:166] Checking apiserver status ...
	I0814 00:08:12.871611   31303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:08:12.886595   31303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup
	W0814 00:08:12.895799   31303 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1087/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 00:08:12.895851   31303 ssh_runner.go:195] Run: ls
	I0814 00:08:12.899609   31303 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 00:08:12.905642   31303 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 00:08:12.905663   31303 status.go:422] ha-105013 apiserver status = Running (err=<nil>)
	I0814 00:08:12.905675   31303 status.go:257] ha-105013 status: &{Name:ha-105013 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:08:12.905696   31303 status.go:255] checking status of ha-105013-m02 ...
	I0814 00:08:12.905996   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.906065   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.921146   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0814 00:08:12.921516   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.921928   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.921951   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.922324   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.922509   31303 main.go:141] libmachine: (ha-105013-m02) Calling .GetState
	I0814 00:08:12.923954   31303 status.go:330] ha-105013-m02 host status = "Stopped" (err=<nil>)
	I0814 00:08:12.923969   31303 status.go:343] host is not running, skipping remaining checks
	I0814 00:08:12.923987   31303 status.go:257] ha-105013-m02 status: &{Name:ha-105013-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:08:12.924009   31303 status.go:255] checking status of ha-105013-m03 ...
	I0814 00:08:12.924293   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.924339   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.939489   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0814 00:08:12.939860   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.940337   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.940356   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.940629   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.940841   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetState
	I0814 00:08:12.942431   31303 status.go:330] ha-105013-m03 host status = "Running" (err=<nil>)
	I0814 00:08:12.942450   31303 host.go:66] Checking if "ha-105013-m03" exists ...
	I0814 00:08:12.942753   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.942784   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.957800   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
	I0814 00:08:12.958183   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.958573   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.958594   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.958921   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.959111   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetIP
	I0814 00:08:12.961697   31303 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:08:12.962035   31303 main.go:141] libmachine: (ha-105013-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:67:1f", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:05:54 +0000 UTC Type:0 Mac:52:54:00:b1:67:1f Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:ha-105013-m03 Clientid:01:52:54:00:b1:67:1f}
	I0814 00:08:12.962082   31303 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined IP address 192.168.39.177 and MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:08:12.962185   31303 host.go:66] Checking if "ha-105013-m03" exists ...
	I0814 00:08:12.962608   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:12.962653   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:12.977234   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0814 00:08:12.977669   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:12.978136   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:12.978156   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:12.978424   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:12.978583   31303 main.go:141] libmachine: (ha-105013-m03) Calling .DriverName
	I0814 00:08:12.978784   31303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:08:12.978805   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHHostname
	I0814 00:08:12.981587   31303 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:08:12.982026   31303 main.go:141] libmachine: (ha-105013-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:67:1f", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:05:54 +0000 UTC Type:0 Mac:52:54:00:b1:67:1f Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:ha-105013-m03 Clientid:01:52:54:00:b1:67:1f}
	I0814 00:08:12.982065   31303 main.go:141] libmachine: (ha-105013-m03) DBG | domain ha-105013-m03 has defined IP address 192.168.39.177 and MAC address 52:54:00:b1:67:1f in network mk-ha-105013
	I0814 00:08:12.982208   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHPort
	I0814 00:08:12.982363   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHKeyPath
	I0814 00:08:12.982504   31303 main.go:141] libmachine: (ha-105013-m03) Calling .GetSSHUsername
	I0814 00:08:12.982612   31303 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m03/id_rsa Username:docker}
	I0814 00:08:13.065160   31303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:08:13.080990   31303 kubeconfig.go:125] found "ha-105013" server: "https://192.168.39.254:8443"
	I0814 00:08:13.081016   31303 api_server.go:166] Checking apiserver status ...
	I0814 00:08:13.081050   31303 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:08:13.094332   31303 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	W0814 00:08:13.103403   31303 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 00:08:13.103454   31303 ssh_runner.go:195] Run: ls
	I0814 00:08:13.107849   31303 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 00:08:13.114238   31303 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 00:08:13.114259   31303 status.go:422] ha-105013-m03 apiserver status = Running (err=<nil>)
	I0814 00:08:13.114267   31303 status.go:257] ha-105013-m03 status: &{Name:ha-105013-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:08:13.114288   31303 status.go:255] checking status of ha-105013-m04 ...
	I0814 00:08:13.114587   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:13.114631   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:13.129320   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46225
	I0814 00:08:13.129723   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:13.130171   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:13.130189   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:13.130495   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:13.130675   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetState
	I0814 00:08:13.132122   31303 status.go:330] ha-105013-m04 host status = "Running" (err=<nil>)
	I0814 00:08:13.132135   31303 host.go:66] Checking if "ha-105013-m04" exists ...
	I0814 00:08:13.132411   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:13.132450   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:13.147082   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0814 00:08:13.147481   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:13.147874   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:13.147894   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:13.148188   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:13.148373   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetIP
	I0814 00:08:13.150836   31303 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:13.151243   31303 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:07:15 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:08:13.151280   31303 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:13.151402   31303 host.go:66] Checking if "ha-105013-m04" exists ...
	I0814 00:08:13.151734   31303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:08:13.151786   31303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:08:13.166884   31303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0814 00:08:13.167372   31303 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:08:13.167868   31303 main.go:141] libmachine: Using API Version  1
	I0814 00:08:13.167893   31303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:08:13.168271   31303 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:08:13.168487   31303 main.go:141] libmachine: (ha-105013-m04) Calling .DriverName
	I0814 00:08:13.168702   31303 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:08:13.168721   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHHostname
	I0814 00:08:13.171934   31303 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:13.172338   31303 main.go:141] libmachine: (ha-105013-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:47:1b", ip: ""} in network mk-ha-105013: {Iface:virbr1 ExpiryTime:2024-08-14 01:07:15 +0000 UTC Type:0 Mac:52:54:00:36:47:1b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-105013-m04 Clientid:01:52:54:00:36:47:1b}
	I0814 00:08:13.172365   31303 main.go:141] libmachine: (ha-105013-m04) DBG | domain ha-105013-m04 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:47:1b in network mk-ha-105013
	I0814 00:08:13.172505   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHPort
	I0814 00:08:13.172650   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHKeyPath
	I0814 00:08:13.172772   31303 main.go:141] libmachine: (ha-105013-m04) Calling .GetSSHUsername
	I0814 00:08:13.172879   31303 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/ha-105013-m04/id_rsa Username:docker}
	I0814 00:08:13.252190   31303 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:08:13.265208   31303 status.go:257] ha-105013-m04 status: &{Name:ha-105013-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (2.89s)
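
The stderr above traces minikube's per-node status sequence: query the libvirt domain state, SSH into the node, check the kubelet unit with systemctl, locate the kube-apiserver process and its cgroup, and finally probe /healthz on the control-plane address. The following is a minimal standalone sketch of that last probe in Go, not minikube's own implementation; the InsecureSkipVerify transport is an assumption made only because the cluster CA bundle is not part of this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the "Checking apiserver healthz at https://<ip>:8443/healthz" step.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	// A healthy apiserver answers 200 with the body "ok", as seen in the log.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver check failed:", err)
	}
}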

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 node start m02 -v=7 --alsologtostderr
E0814 00:08:36.124325   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-105013 node start m02 -v=7 --alsologtostderr: (44.679225909s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 node delete m03 -v=7 --alsologtostderr
E0814 00:15:05.519641   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-105013 node delete m03 -v=7 --alsologtostderr: (16.008669952s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)
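
The node-deletion check above ends with `kubectl get nodes -o go-template=...`, which prints the Ready condition status for every remaining node. As a standalone illustration, the sketch below runs the same template through Go's text/template against a trimmed, hypothetical `kubectl get nodes -o json` payload; only the range/if/eq constructs used by the test are exercised, so kubectl-specific template extensions are not needed.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// sample is a hypothetical, trimmed stand-in for `kubectl get nodes -o json`.
const sample = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

// ready is the exact template string from the test command above.
const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		panic(err)
	}
	// Prints one " True" line per Ready node, which is what the test asserts on.
	if err := template.Must(template.New("ready").Parse(ready)).Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}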

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (348.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105013 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 00:20:05.519049   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:22:14.185526   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-105013 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.097896288s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (348.81s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-105013 --control-plane -v=7 --alsologtostderr
E0814 00:23:37.249483   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-105013 --control-plane -v=7 --alsologtostderr: (1m12.257705252s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-105013 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (47.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-027475 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0814 00:25:05.519413   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-027475 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (47.892358684s)
--- PASS: TestJSONOutput/start/Command (47.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-027475 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-027475 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.58s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-027475 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-027475 --output=json --user=testUser: (6.574923389s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-021097 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-021097 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (55.669339ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"30ff9e53-e0e4-478e-81c6-d0fb2c127dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-021097] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6baa886c-f939-4c17-a018-452fbb259dbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19429"}}
	{"specversion":"1.0","id":"b027e68a-a290-4e99-a2e7-942b1fea19c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea09c4f3-1f38-4cc7-a7a1-01c6b7d0aba6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig"}}
	{"specversion":"1.0","id":"7378fb87-c327-44d5-a06c-7cc94c6cfc5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube"}}
	{"specversion":"1.0","id":"f000955f-6e31-4ad8-87dd-15c834ae323d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"31aa1e93-965c-4363-80e7-1d0879938ff2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cfde2809-e5d7-42a9-8f31-d67e74b153bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-021097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-021097
--- PASS: TestErrorJSONOutput (0.18s)
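
TestErrorJSONOutput relies on the stream shown in the stdout block above: with --output=json, minikube emits one CloudEvents-style JSON object per line, with the event kind in the "type" field and string values in "data". Below is a minimal consumer sketch; the struct shape is inferred from the sample lines in this report rather than taken from minikube's source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event matches the fields visible in the sample output above.
type event struct {
	ID   string            `json:"id"`
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Example (hypothetical profile name): minikube start -p demo --output=json | go run consumer.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}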

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.78s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-403141 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-403141 --driver=kvm2  --container-runtime=crio: (40.389841596s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-405727 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-405727 --driver=kvm2  --container-runtime=crio: (41.815922521s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-403141
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-405727
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-405727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-405727
helpers_test.go:175: Cleaning up "first-403141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-403141
--- PASS: TestMinikubeProfile (84.78s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-933671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0814 00:27:14.188823   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-933671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.537298301s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.54s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-933671 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-933671 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
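
The VerifyMount steps assert that the 9p share is visible by running `ssh -- mount | grep 9p` inside the guest. A rough equivalent, shown only as a sketch, reads /proc/mounts directly instead of shelling out:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// each line: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 4 && fields[2] == "9p" {
			fmt.Println("9p mount at", fields[1], "options:", fields[3])
		}
	}
}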

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-950406 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-950406 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.453444695s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-933671 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-950406
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-950406: (1.259075638s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-950406
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-950406: (22.016179331s)
--- PASS: TestMountStart/serial/RestartStopped (23.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-950406 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-745925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 00:30:05.519365   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-745925 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.587174019s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.98s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-745925 -- rollout status deployment/busybox: (3.331962273s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-q5qs4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-zkbzf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-q5qs4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-zkbzf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-q5qs4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-zkbzf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-q5qs4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-q5qs4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-zkbzf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-745925 -- exec busybox-7dff88458-zkbzf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
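
PingHostFrom2Pods checks, from inside each busybox pod, that host.minikube.internal resolves (the nslookup | awk | cut pipeline extracts the address from line 5 of the nslookup output) and that the resolved gateway answers a single echo request (`ping -c 1 192.168.39.1`). A rough Go equivalent is sketched below; it is an illustration only, and the name resolves successfully only when run inside the cluster or the minikube VM.

package main

import (
	"fmt"
	"net"
	"os/exec"
)

func main() {
	addrs, err := net.LookupHost("host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("host.minikube.internal resolves to", addrs)
	// One echo request to the resolved address, mirroring `ping -c 1 192.168.39.1`.
	out, err := exec.Command("ping", "-c", "1", addrs[0]).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("ping failed:", err)
	}
}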

                                                
                                    
TestMultiNode/serial/AddNode (47.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-745925 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-745925 -v 3 --alsologtostderr: (46.966620002s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-745925 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp testdata/cp-test.txt multinode-745925:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925:/home/docker/cp-test.txt multinode-745925-m02:/home/docker/cp-test_multinode-745925_multinode-745925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test_multinode-745925_multinode-745925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925:/home/docker/cp-test.txt multinode-745925-m03:/home/docker/cp-test_multinode-745925_multinode-745925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test_multinode-745925_multinode-745925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp testdata/cp-test.txt multinode-745925-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt multinode-745925:/home/docker/cp-test_multinode-745925-m02_multinode-745925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test_multinode-745925-m02_multinode-745925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m02:/home/docker/cp-test.txt multinode-745925-m03:/home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test_multinode-745925-m02_multinode-745925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp testdata/cp-test.txt multinode-745925-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031533634/001/cp-test_multinode-745925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt multinode-745925:/home/docker/cp-test_multinode-745925-m03_multinode-745925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925 "sudo cat /home/docker/cp-test_multinode-745925-m03_multinode-745925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 cp multinode-745925-m03:/home/docker/cp-test.txt multinode-745925-m02:/home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 ssh -n multinode-745925-m02 "sudo cat /home/docker/cp-test_multinode-745925-m03_multinode-745925-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.79s)

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-745925 node stop m03: (1.38585814s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-745925 status: exit status 7 (397.907674ms)

                                                
                                                
-- stdout --
	multinode-745925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-745925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-745925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr: exit status 7 (400.750399ms)

                                                
                                                
-- stdout --
	multinode-745925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-745925-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-745925-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:31:11.621565   43359 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:31:11.621662   43359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:31:11.621668   43359 out.go:304] Setting ErrFile to fd 2...
	I0814 00:31:11.621673   43359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:31:11.621869   43359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:31:11.622066   43359 out.go:298] Setting JSON to false
	I0814 00:31:11.622100   43359 mustload.go:65] Loading cluster: multinode-745925
	I0814 00:31:11.622188   43359 notify.go:220] Checking for updates...
	I0814 00:31:11.622501   43359 config.go:182] Loaded profile config "multinode-745925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:31:11.622519   43359 status.go:255] checking status of multinode-745925 ...
	I0814 00:31:11.622980   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.623029   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.641030   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34457
	I0814 00:31:11.641504   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.642183   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.642203   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.642611   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.642774   43359 main.go:141] libmachine: (multinode-745925) Calling .GetState
	I0814 00:31:11.644368   43359 status.go:330] multinode-745925 host status = "Running" (err=<nil>)
	I0814 00:31:11.644385   43359 host.go:66] Checking if "multinode-745925" exists ...
	I0814 00:31:11.644702   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.644739   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.661507   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39145
	I0814 00:31:11.661860   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.662334   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.662366   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.662655   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.662822   43359 main.go:141] libmachine: (multinode-745925) Calling .GetIP
	I0814 00:31:11.665893   43359 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:31:11.666343   43359 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:31:11.666376   43359 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:31:11.666449   43359 host.go:66] Checking if "multinode-745925" exists ...
	I0814 00:31:11.666751   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.666792   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.681681   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0814 00:31:11.682095   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.682488   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.682511   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.682761   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.682942   43359 main.go:141] libmachine: (multinode-745925) Calling .DriverName
	I0814 00:31:11.683097   43359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:31:11.683122   43359 main.go:141] libmachine: (multinode-745925) Calling .GetSSHHostname
	I0814 00:31:11.685868   43359 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:31:11.686348   43359 main.go:141] libmachine: (multinode-745925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:87:ad", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:28:32 +0000 UTC Type:0 Mac:52:54:00:eb:87:ad Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-745925 Clientid:01:52:54:00:eb:87:ad}
	I0814 00:31:11.686394   43359 main.go:141] libmachine: (multinode-745925) DBG | domain multinode-745925 has defined IP address 192.168.39.201 and MAC address 52:54:00:eb:87:ad in network mk-multinode-745925
	I0814 00:31:11.686534   43359 main.go:141] libmachine: (multinode-745925) Calling .GetSSHPort
	I0814 00:31:11.686690   43359 main.go:141] libmachine: (multinode-745925) Calling .GetSSHKeyPath
	I0814 00:31:11.686862   43359 main.go:141] libmachine: (multinode-745925) Calling .GetSSHUsername
	I0814 00:31:11.686975   43359 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925/id_rsa Username:docker}
	I0814 00:31:11.764641   43359 ssh_runner.go:195] Run: systemctl --version
	I0814 00:31:11.770183   43359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:31:11.784083   43359 kubeconfig.go:125] found "multinode-745925" server: "https://192.168.39.201:8443"
	I0814 00:31:11.784115   43359 api_server.go:166] Checking apiserver status ...
	I0814 00:31:11.784151   43359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 00:31:11.796666   43359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup
	W0814 00:31:11.805808   43359 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 00:31:11.805858   43359 ssh_runner.go:195] Run: ls
	I0814 00:31:11.809480   43359 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I0814 00:31:11.813387   43359 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I0814 00:31:11.813406   43359 status.go:422] multinode-745925 apiserver status = Running (err=<nil>)
	I0814 00:31:11.813415   43359 status.go:257] multinode-745925 status: &{Name:multinode-745925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:31:11.813430   43359 status.go:255] checking status of multinode-745925-m02 ...
	I0814 00:31:11.813726   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.813756   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.828828   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0814 00:31:11.829208   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.829624   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.829650   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.829962   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.830184   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetState
	I0814 00:31:11.831595   43359 status.go:330] multinode-745925-m02 host status = "Running" (err=<nil>)
	I0814 00:31:11.831612   43359 host.go:66] Checking if "multinode-745925-m02" exists ...
	I0814 00:31:11.831885   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.831923   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.846319   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0814 00:31:11.846723   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.847165   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.847185   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.847445   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.847598   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetIP
	I0814 00:31:11.849884   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | domain multinode-745925-m02 has defined MAC address 52:54:00:fa:ee:8f in network mk-multinode-745925
	I0814 00:31:11.850305   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:ee:8f", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:29:32 +0000 UTC Type:0 Mac:52:54:00:fa:ee:8f Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-745925-m02 Clientid:01:52:54:00:fa:ee:8f}
	I0814 00:31:11.850337   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | domain multinode-745925-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fa:ee:8f in network mk-multinode-745925
	I0814 00:31:11.850487   43359 host.go:66] Checking if "multinode-745925-m02" exists ...
	I0814 00:31:11.850784   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.850814   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.865134   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0814 00:31:11.865553   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.866025   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.866073   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.866334   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.866521   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .DriverName
	I0814 00:31:11.866670   43359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 00:31:11.866691   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetSSHHostname
	I0814 00:31:11.869215   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | domain multinode-745925-m02 has defined MAC address 52:54:00:fa:ee:8f in network mk-multinode-745925
	I0814 00:31:11.869587   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:ee:8f", ip: ""} in network mk-multinode-745925: {Iface:virbr1 ExpiryTime:2024-08-14 01:29:32 +0000 UTC Type:0 Mac:52:54:00:fa:ee:8f Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-745925-m02 Clientid:01:52:54:00:fa:ee:8f}
	I0814 00:31:11.869628   43359 main.go:141] libmachine: (multinode-745925-m02) DBG | domain multinode-745925-m02 has defined IP address 192.168.39.55 and MAC address 52:54:00:fa:ee:8f in network mk-multinode-745925
	I0814 00:31:11.869802   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetSSHPort
	I0814 00:31:11.869952   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetSSHKeyPath
	I0814 00:31:11.870104   43359 main.go:141] libmachine: (multinode-745925-m02) Calling .GetSSHUsername
	I0814 00:31:11.870249   43359 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19429-9425/.minikube/machines/multinode-745925-m02/id_rsa Username:docker}
	I0814 00:31:11.948637   43359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 00:31:11.961819   43359 status.go:257] multinode-745925-m02 status: &{Name:multinode-745925-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0814 00:31:11.961852   43359 status.go:255] checking status of multinode-745925-m03 ...
	I0814 00:31:11.962229   43359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 00:31:11.962265   43359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 00:31:11.977198   43359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43067
	I0814 00:31:11.977659   43359 main.go:141] libmachine: () Calling .GetVersion
	I0814 00:31:11.978150   43359 main.go:141] libmachine: Using API Version  1
	I0814 00:31:11.978177   43359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 00:31:11.978484   43359 main.go:141] libmachine: () Calling .GetMachineName
	I0814 00:31:11.978665   43359 main.go:141] libmachine: (multinode-745925-m03) Calling .GetState
	I0814 00:31:11.980147   43359 status.go:330] multinode-745925-m03 host status = "Stopped" (err=<nil>)
	I0814 00:31:11.980162   43359 status.go:343] host is not running, skipping remaining checks
	I0814 00:31:11.980170   43359 status.go:257] multinode-745925-m03 status: &{Name:multinode-745925-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
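
For reference, the verbose per-node status walk shown in the stderr trace above can be reproduced directly against the same profile. This is only a sketch using the profile name from the log and assumes the cluster is still running.
    # Verbose status query (mirrors the -v=7 --alsologtostderr trace above).
    minikube -p multinode-745925 status -v=7 --alsologtostderr
    # Single-field output via a Go template, as used elsewhere in this report.
    minikube -p multinode-745925 status --format='{{.Host}}'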

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-745925 node start m03 -v=7 --alsologtostderr: (38.357179254s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.95s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-745925 node delete m03: (1.651756817s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)
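
The final check in this test asserts that every remaining node reports a Ready condition. The Go template it passes to kubectl is worth calling out on its own; the command below is the same one the test runs, shown without the test-harness quoting.
    # Prints the status of each node's Ready condition, one "True"/"False" per line.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'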

                                                
                                    
TestMultiNode/serial/RestartMultiNode (179.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-745925 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 00:40:05.519671   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:40:17.251885   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 00:42:14.185480   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-745925 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.599316789s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-745925 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (179.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-745925
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-745925-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-745925-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.312586ms)

                                                
                                                
-- stdout --
	* [multinode-745925-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-745925-m02' is duplicated with machine name 'multinode-745925-m02' in profile 'multinode-745925'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-745925-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-745925-m03 --driver=kvm2  --container-runtime=crio: (39.850719866s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-745925
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-745925: exit status 80 (204.880734ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-745925 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-745925-m03 already exists in multinode-745925-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-745925-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.11s)
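
The two non-zero exits above are the expected guard rails: a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and a node cannot be added while its name collides with an existing profile (exit 80, GUEST_NODE_ADD). A minimal reproduction, using the names from the log:
    # Rejected: "multinode-745925-m02" is already a machine inside profile "multinode-745925".
    minikube start -p multinode-745925-m02 --driver=kvm2 --container-runtime=crio
    # Rejected while a conflicting m03 profile exists; succeeds again once it is deleted.
    minikube node add -p multinode-745925
    minikube delete -p multinode-745925-m03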

                                                
                                    
TestScheduledStopUnix (110.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-063994 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-063994 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.649999532s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-063994 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-063994 -n scheduled-stop-063994
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-063994 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-063994 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-063994 -n scheduled-stop-063994
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-063994
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-063994 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0814 00:49:48.593511   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-063994
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-063994: exit status 7 (64.710051ms)

                                                
                                                
-- stdout --
	scheduled-stop-063994
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-063994 -n scheduled-stop-063994
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-063994 -n scheduled-stop-063994: exit status 7 (64.046928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-063994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-063994
--- PASS: TestScheduledStopUnix (110.23s)
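
This test exercises the scheduled-stop feature end to end: schedule, cancel, re-schedule with a short delay, then confirm the host actually stopped. A condensed sketch of the same flow (the profile name is just the one the test generated; timings are illustrative):
    # Schedule a stop five minutes out, check the countdown, then cancel it.
    minikube stop -p scheduled-stop-063994 --schedule 5m
    minikube status -p scheduled-stop-063994 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-063994 --cancel-scheduled
    # Re-schedule with a 15s delay; once it fires, `status` exits 7 and reports Stopped.
    minikube stop -p scheduled-stop-063994 --schedule 15s
    sleep 30
    minikube status -p scheduled-stop-063994 --format='{{.Host}}'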

                                                
                                    
TestRunningBinaryUpgrade (216.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.815912896 start -p running-upgrade-095151 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0814 00:50:05.519207   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/addons-937866/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.815912896 start -p running-upgrade-095151 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.672831794s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-095151 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0814 00:52:14.186227   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-095151 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.829951963s)
helpers_test.go:175: Cleaning up "running-upgrade-095151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-095151
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-095151: (1.166061306s)
--- PASS: TestRunningBinaryUpgrade (216.01s)
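
The upgrade path verified here is: start a cluster with an older released binary, then run `start` again with the freshly built binary against the same, still-running profile. Outline only; the /tmp path is the temporary copy the test downloaded and will differ between runs.
    /tmp/minikube-v1.26.0.815912896 start -p running-upgrade-095151 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-095151 --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-095151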

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.399937ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-083412] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
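
The expected failure here documents a flag conflict: `--no-kubernetes` cannot be combined with an explicit `--kubernetes-version`, and the remediation minikube prints is to clear any globally configured version first. Sketch of the failing and working invocations (profile name from the log):
    # Rejected with exit status 14 (MK_USAGE): the two flags contradict each other.
    minikube start -p NoKubernetes-083412 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # Clear a globally configured version, as suggested by the error message, then retry without it.
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-083412 --no-kubernetes --driver=kvm2 --container-runtime=crio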

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083412 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083412 --driver=kvm2  --container-runtime=crio: (1m24.960256707s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-083412 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (137.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.861775947 start -p stopped-upgrade-677470 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.861775947 start -p stopped-upgrade-677470 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m31.282730305s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.861775947 -p stopped-upgrade-677470 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.861775947 -p stopped-upgrade-677470 stop: (1.37583664s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-677470 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-677470 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.174227517s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (137.83s)
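
This variant differs from TestRunningBinaryUpgrade above in one step: the cluster is stopped with the old binary before the new binary takes over, so the upgrade happens on a cold start. Outline with the names reported by the test; paths and profile names would differ in a local run.
    /tmp/minikube-v1.26.0.861775947 start -p stopped-upgrade-677470 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.861775947 -p stopped-upgrade-677470 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-677470 --memory=2200 --driver=kvm2 --container-runtime=crio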

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (36.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.323412483s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-083412 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-083412 status -o json: exit status 2 (239.909425ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-083412","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-083412
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-083412: (1.072036553s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.64s)
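
Re-running `start` with `--no-kubernetes` on an existing profile keeps the VM running but stops the Kubernetes components, which is why `status` exits 2 and reports Host Running / Kubelet Stopped above. A hedged way to read just those fields (jq is not used by the test and is assumed to be installed):
    # Exit status 2 marks a partially running profile; jq only trims the JSON for readability.
    minikube -p NoKubernetes-083412 status -o json | jq '{Name, Host, Kubelet, APIServer}'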

                                                
                                    
TestNoKubernetes/serial/Start (27.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083412 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.99053666s)
--- PASS: TestNoKubernetes/serial/Start (27.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-083412 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-083412 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.678071ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
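
The assertion rides on the exit code of `systemctl is-active --quiet`, forwarded through `minikube ssh` (the "Process exited with status 3" in stderr is systemd's code for an inactive unit). A one-line equivalent check, using the profile name from the log:
    # Non-zero exit means kubelet is not an active unit inside the guest VM.
    minikube ssh -p NoKubernetes-083412 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"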

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.618747274s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.37s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-083412
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-083412: (1.273984686s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (33.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-083412 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-083412 --driver=kvm2  --container-runtime=crio: (33.943362387s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-083412 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-083412 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.184117ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-677470
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

                                                
                                    
TestNetworkPlugins/group/false (3.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-612440 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-612440 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.404278ms)

                                                
                                                
-- stdout --
	* [false-612440] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 00:53:14.169662   54436 out.go:291] Setting OutFile to fd 1 ...
	I0814 00:53:14.169936   54436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:53:14.169948   54436 out.go:304] Setting ErrFile to fd 2...
	I0814 00:53:14.169953   54436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 00:53:14.170468   54436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19429-9425/.minikube/bin
	I0814 00:53:14.171533   54436 out.go:298] Setting JSON to false
	I0814 00:53:14.172514   54436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5740,"bootTime":1723591054,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 00:53:14.172589   54436 start.go:139] virtualization: kvm guest
	I0814 00:53:14.174518   54436 out.go:177] * [false-612440] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 00:53:14.176042   54436 notify.go:220] Checking for updates...
	I0814 00:53:14.176077   54436 out.go:177]   - MINIKUBE_LOCATION=19429
	I0814 00:53:14.177260   54436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 00:53:14.178588   54436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19429-9425/kubeconfig
	I0814 00:53:14.179820   54436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19429-9425/.minikube
	I0814 00:53:14.180976   54436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 00:53:14.182295   54436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 00:53:14.183950   54436 config.go:182] Loaded profile config "force-systemd-env-900037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 00:53:14.184055   54436 config.go:182] Loaded profile config "kubernetes-upgrade-492920": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 00:53:14.184135   54436 config.go:182] Loaded profile config "running-upgrade-095151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0814 00:53:14.184197   54436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 00:53:14.221314   54436 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 00:53:14.222564   54436 start.go:297] selected driver: kvm2
	I0814 00:53:14.222579   54436 start.go:901] validating driver "kvm2" against <nil>
	I0814 00:53:14.222589   54436 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 00:53:14.224435   54436 out.go:177] 
	W0814 00:53:14.225638   54436 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0814 00:53:14.226922   54436 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-612440 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.14:8443
  name: running-upgrade-095151
contexts:
- context:
    cluster: running-upgrade-095151
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-095151
  name: running-upgrade-095151
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-095151
  user:
    client-certificate: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.crt
    client-key: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-612440

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-612440"

                                                
                                                
----------------------- debugLogs end: false-612440 [took: 2.763642465s] --------------------------------
helpers_test.go:175: Cleaning up "false-612440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-612440
--- PASS: TestNetworkPlugins/group/false (3.03s)
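
Only flag validation is exercised here: the crio container runtime requires a CNI, so `--cni=false` is rejected before any VM is created (exit 14). The long run of "context was not found" / "Profile not found" messages in the debugLogs above is expected, because the cluster never came up. Sketch of the rejected invocation and an accepted one (the concrete CNI value is illustrative):
    # Rejected: crio needs a CNI plugin, so disabling CNI is a usage error.
    minikube start -p false-612440 --cni=false --driver=kvm2 --container-runtime=crio
    # Accepted: pick any concrete CNI, or omit --cni to let minikube auto-select one.
    minikube start -p false-612440 --cni=bridge --driver=kvm2 --container-runtime=crio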

                                                
                                    
TestPause/serial/Start (89.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-074686 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-074686 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m29.959997085s)
--- PASS: TestPause/serial/Start (89.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-074686 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-074686 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.41199578s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-901410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-901410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m26.280249938s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.28s)

                                                
                                    
TestPause/serial/Pause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-074686 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-074686 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-074686 --output=json --layout=cluster: exit status 2 (257.323044ms)

                                                
                                                
-- stdout --
	{"Name":"pause-074686","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-074686","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
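
With `--layout=cluster`, a paused profile is reported with status code 418 ("Paused") for the node and the apiserver, and the command itself exits 2 because the cluster is not fully running. For reference, using the profile name from the log:
    # Exit status 2 is expected while the profile is paused; the JSON carries the 418/Paused codes.
    minikube status -p pause-074686 --output=json --layout=cluster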

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-074686 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-074686 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.64s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-074686 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-074686 --alsologtostderr -v=5: (1.635237566s)
--- PASS: TestPause/serial/DeletePaused (1.64s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (97.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-776907 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 00:56:57.254079   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-776907 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m37.692737s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-901410 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e286c90-4935-46d2-bbdc-79183d4cb5b4] Pending
helpers_test.go:344: "busybox" [6e286c90-4935-46d2-bbdc-79183d4cb5b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e286c90-4935-46d2-bbdc-79183d4cb5b4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004409835s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-901410 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.31s)
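The DeployApp step above waits up to 8m0s for a pod carrying the "integration-test=busybox" label to reach Running. A sketch of that wait using client-go instead of the harness helper; the kubeconfig location and the 2-second poll interval are assumptions, and checking Phase is a simplification of the harness's health check:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed: default kubeconfig location ($HOME/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(8 * time.Minute) // same 8m0s budget as the test
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "integration-test=busybox"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println("busybox is Running:", p.Name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for busybox")
    }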

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-585256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-585256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (54.192790314s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-901410 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-901410 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-776907 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c514e832-2998-4439-bb97-0d6d4eb4e499] Pending
helpers_test.go:344: "busybox" [c514e832-2998-4439-bb97-0d6d4eb4e499] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c514e832-2998-4439-bb97-0d6d4eb4e499] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004647719s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-776907 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-776907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-776907 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a4d55bac-7806-4e93-b77c-e6c02920a740] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a4d55bac-7806-4e93-b77c-e6c02920a740] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004358598s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-585256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-585256 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (665.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-901410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-901410 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m5.364713385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-901410 -n embed-certs-901410
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (665.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (543.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-776907 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-776907 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m3.071437512s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-776907 -n no-preload-776907
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (543.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-179312 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-179312 --alsologtostderr -v=3: (6.286744324s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-585256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-585256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m23.483801538s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-585256 -n default-k8s-diff-port-585256
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-179312 -n old-k8s-version-179312: exit status 7 (63.813583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-179312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
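As the log notes, "status" exiting with code 7 against a stopped profile is treated as informational ("may be ok") rather than as a failure. A hedged Go sketch of that handling around the same command shown above:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-179312", "-n", "old-k8s-version-179312")
        out, err := cmd.Output()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("host state: %s\n", out)
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            // Matches "status error: exit status 7 (may be ok)" above:
            // the profile exists but the host is stopped, stdout says "Stopped".
            fmt.Printf("host stopped (exit 7): %s\n", out)
        default:
            fmt.Println("status failed:", err)
        }
    }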

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (45.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-137211 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-137211 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (45.438946829s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-137211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-137211 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-137211 --alsologtostderr -v=3: (10.489206706s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-137211 -n newest-cni-137211
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-137211 -n newest-cni-137211: exit status 7 (60.463944ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-137211 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-137211 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-137211 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.535812256s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-137211 -n newest-cni-137211
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (49.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (49.11554432s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-137211 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-137211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-137211 -n newest-cni-137211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-137211 -n newest-cni-137211: exit status 2 (231.247186ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-137211 -n newest-cni-137211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-137211 -n newest-cni-137211: exit status 2 (233.288033ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-137211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-137211 -n newest-cni-137211
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-137211 -n newest-cni-137211
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (67.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.876012079s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (97.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0814 01:27:14.185505   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m37.978295677s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z4jg6" [db97cc67-f67a-493d-b2ca-0eb0485c0286] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z4jg6" [db97cc67-f67a-493d-b2ca-0eb0485c0286] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.004737221s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (22.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-612440 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-612440 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144900606s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-612440 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-612440 exec deployment/netcat -- nslookup kubernetes.default: (5.535913607s)
--- PASS: TestNetworkPlugins/group/auto/DNS (22.07s)
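The first in-pod lookup above times out ("no servers could be reached") and the test simply reruns the same command until it succeeds. A rough Go sketch of that retry pattern; the retry count and backoff are assumptions, while the context and deployment names come from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{"--context", "auto-612440", "exec", "deployment/netcat",
            "--", "nslookup", "kubernetes.default"}

        var lastErr error
        for attempt := 1; attempt <= 5; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("DNS resolved on attempt %d:\n%s", attempt, out)
                return
            }
            lastErr = err
            fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
            time.Sleep(5 * time.Second) // arbitrary backoff for the sketch
        }
        fmt.Println("DNS never resolved:", lastErr)
    }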

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-s95bz" [a197bc12-87e4-4b20-9369-a5d7690405ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004436443s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
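The Localhost and HairPin checks above are plain "nc -w 5 -z" connect probes run inside the netcat pod. A rough Go equivalent, illustrative only: probing "localhost:8080" mirrors the Localhost test, and probing "netcat:8080" (the pod reaching itself through its own Service) mirrors the hairpin test:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func probe(addr string) {
        // Bare TCP connect with the same 5s timeout the nc probes use.
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            fmt.Printf("%s: unreachable (%v)\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("%s: reachable\n", addr)
    }

    func main() {
        probe("localhost:8080") // Localhost check
        probe("netcat:8080")    // HairPin check: service name resolves back to the pod
    }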

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cc2tv" [ba6f933d-37cd-416c-ba0d-8da9a857a993] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cc2tv" [ba6f933d-37cd-416c-ba0d-8da9a857a993] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004213138s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (70.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.086815993s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-612440 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (73.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m13.77896584s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (102.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0814 01:28:33.967400   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:33.973767   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:33.985161   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:34.006651   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:34.048047   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:34.129532   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:34.291591   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:34.613692   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:35.255094   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:36.537160   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m42.018895908s)
--- PASS: TestNetworkPlugins/group/flannel/Start (102.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8xmmq" [1f18ada1-bbed-46fc-af10-ed2db9062bf1] Running
E0814 01:28:39.098743   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004775451s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-612440 replace --force -f testdata/netcat-deployment.yaml
E0814 01:28:44.220774   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4fk9c" [57cc1fe8-eb30-49a7-b61b-53a2303e3cc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4fk9c" [57cc1fe8-eb30-49a7-b61b-53a2303e3cc9] Running
E0814 01:28:54.462809   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005116602s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-612440 exec deployment/netcat -- nslookup kubernetes.default
E0814 01:28:55.459891   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:55.466858   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0814 01:28:55.478123   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:55.502172   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:55.543828   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:28:55.625974   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (97.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0814 01:29:14.945083   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/no-preload-776907/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-612440 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m37.625729765s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bthpm" [8f3948f7-f811-484f-8268-1f151e9534e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 01:29:15.959630   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-bthpm" [8f3948f7-f811-484f-8268-1f151e9534e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007032197s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-612440 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gqbpw" [6255346e-a289-47b9-bbff-dcaa02b91346] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gqbpw" [6255346e-a289-47b9-bbff-dcaa02b91346] Running
E0814 01:29:35.467787   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:29:36.441314   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004050272s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-612440 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7mnlk" [3dfd47b6-a81c-4ad9-a2e2-2cc7039b5ea3] Running
E0814 01:30:11.312275   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003946839s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvtzb" [28dbde78-b084-48df-b05e-3df55afae12f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 01:30:17.257791   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/functional-770612/client.crt: no such file or directory" logger="UnhandledError"
E0814 01:30:17.403248   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/default-k8s-diff-port-585256/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mvtzb" [28dbde78-b084-48df-b05e-3df55afae12f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.00403308s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-612440 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-612440 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-612440 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rgvbj" [deab4be8-468d-44a0-9de6-66b86473cef9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0814 01:30:52.274144   16589 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/old-k8s-version-179312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rgvbj" [deab4be8-468d-44a0-9de6-66b86473cef9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003592135s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-612440 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-612440 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.22
272 TestNetworkPlugins/group/kubenet 2.77
280 TestNetworkPlugins/group/cilium 3.42
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-655306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-655306
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-612440 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.14:8443
  name: running-upgrade-095151
contexts:
- context:
    cluster: running-upgrade-095151
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-095151
  name: running-upgrade-095151
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-095151
  user:
    client-certificate: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.crt
    client-key: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-612440

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-612440"

                                                
                                                
----------------------- debugLogs end: kubenet-612440 [took: 2.633529236s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-612440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-612440
--- SKIP: TestNetworkPlugins/group/kubenet (2.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-612440 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-612440" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19429-9425/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.14:8443
  name: running-upgrade-095151
contexts:
- context:
    cluster: running-upgrade-095151
    extensions:
    - extension:
        last-update: Wed, 14 Aug 2024 00:52:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-095151
  name: running-upgrade-095151
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-095151
  user:
    client-certificate: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.crt
    client-key: /home/jenkins/minikube-integration/19429-9425/.minikube/profiles/running-upgrade-095151/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-612440

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-612440" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-612440"

                                                
                                                
----------------------- debugLogs end: cilium-612440 [took: 3.274153596s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-612440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-612440
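The cleanup step relies on minikube's own profile deletion; a minimal equivalent run by hand, assuming the same profile name, would be:

  out/minikube-linux-amd64 delete -p cilium-612440   # clears the profile directory under ~/.minikube/profiles and any kubeconfig entries for it

Because the profile was never created, the delete has no VM to tear down and only removes leftover profile metadata.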
--- SKIP: TestNetworkPlugins/group/cilium (3.42s)

                                                
                                    